In a stored procedure (SP) I want to delete some rows from a table, run some other code, and then re-insert the deleted rows into the same table.
How can I do it?
Thanks all.
Update:
I have a Table:
SampleTable(Col1, Col2, Col3, Col4)
I want to do this:
DELETE FROM SampleTable
WHERE Col1 = 'foo'
-- SOME CODE...
INSERT INTO SampleTable
[DELETED VALUES...]
UPDATE:
Sorry, but right now I can't access the DB.
The problem is that in the SOME CODE... part, written by others, there is a DELETE that gives me an error, but after the delete there is an INSERT that uses the SP input to put back a row with the same key.
I know that an UPDATE would apparently solve my problem, but there is a lot of logic and I don't want to change the SOME CODE... part, so I'm looking for a workaround: I want to temporarily ignore the foreign key.
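If temporarily ignoring the foreign key really is acceptable, one option in SQL Server is to disable the constraint around the whole block and re-enable it afterwards. A minimal sketch, assuming a hypothetical child table ChildTable with a constraint named FK_ChildTable_SampleTable (both names are placeholders):
ALTER TABLE ChildTable NOCHECK CONSTRAINT FK_ChildTable_SampleTable;

-- DELETE / SOME CODE... / INSERT runs here while the constraint is not enforced

-- WITH CHECK revalidates the existing rows so the constraint is trusted again
ALTER TABLE ChildTable WITH CHECK CHECK CONSTRAINT FK_ChildTable_SampleTable;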
select * into #ttable FROM SampleTable
WHERE Col1 = "foo"
DELETE FROM SampleTable
WHERE Col1 = "foo"
-- SOME CODE...
INSERT INTO SampleTable
select * from #ttable
Deleting and re-inserting the rows can introduce all sorts of problems. For instance, identity() values will change (as well as automatically assigned creation times). In addition, you might have constraints. In theory, anything could happen to the database between the deletion and re-insertion, so constraints that once worked might fail.
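If you must delete and re-insert, wrapping the whole block in a transaction at least makes the delete and re-insert atomic, so other sessions cannot slip changes in between. A minimal sketch of the temp-table approach above with an explicit transaction:
BEGIN TRANSACTION;

select * into #ttable FROM SampleTable WHERE Col1 = 'foo';

DELETE FROM SampleTable WHERE Col1 = 'foo';

-- SOME CODE...

INSERT INTO SampleTable
select * from #ttable;

COMMIT TRANSACTION;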
How about creating a view?
create view v_SampleTable as
select *
from SampleTable
where col1 = 'foo' or col1 is null;
Then change the code to use v_SampleTable instead of SampleTable. This is an updatable view, so it will even permit modifications to the data inside the table.
You could go even one step further and rename the table first and then create a view with the same name.
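A minimal sketch of that last idea, assuming SQL Server's sp_rename (the new base-table name is a placeholder):
exec sp_rename 'SampleTable', 'SampleTable_base';
go
create view SampleTable as
select *
from SampleTable_base
where col1 = 'foo' or col1 is null;
go
The existing code keeps referring to SampleTable and transparently works against the view.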
I have had a look at similar problems, but none of the answers helped in my case.
Just a little bit of background: I have two databases, both with the same table, with the same fields and structure. Data already exists in both tables. I want to overwrite and add to the data in db1.table from db2.table; the primary ID is causing a problem with the update.
When I use the query:
USE db1;
INSERT INTO db2.table(field_id,field1,field2)
SELECT table.field_id,table.field1,table.field2
FROM table;
It works into a blank table, because none of the primary keys exist yet. As soon as a primary key already exists, it fails.
Would it be easier for me to overwrite the primary keys, or to find each primary key and update the fields related to that field_id? I'm really not sure how to go ahead from here. The data needs to be migrated every 5 minutes, so possibly a stored procedure is required?
First add the new records, then update the existing ones. You can create a procedure like the code below:
CREATE OR REPLACE PROCEDURE sync_Data(a IN NUMBER) IS
BEGIN
  -- insert the rows that do not exist in the target yet
  insert into db2.table
  select *
  from db1.table t
  where t.field_id not in (select tt.field_id from db2.table tt);

  -- then update every matching row that does exist
  for t in (select * from db1.table) loop
    update db2.table aa
       set aa.field1 = t.field1,
           aa.field2 = t.field2
     where aa.field_id = t.field_id;
  end loop;
END sync_Data;
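A hypothetical invocation from an anonymous block (the parameter is not used inside the procedure, so any number will do):
BEGIN
  sync_Data(0);
  COMMIT;
END;
/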
Set IsIdentity to No under Identity Specification on the table into which you want to move data, and after executing your script, set it back to Yes.
I ended up just removing the data in the new database and sending it again.
DELETE FROM db2.table WHERE db2.table.field_id != 0;
USE db1;
INSERT INTO db2.table(field_id,field1,field2)
SELECT table.field_id,table.field1,table.field2
FROM table;
It's not very efficient, but it gets the job done. I couldn't figure out the syntax to correctly do an UPDATE or to change the IsIdentity field within MariaDB, so I'm not sure whether those would work or not.
The overhead of deleting and replacing non-trivial amounts of data for an entire table will be prohibitive. That said, I'd prefer an update in place (merge) over delete/replace.
USE db1;
INSERT INTO db2.table(field_id,field1,field2)
SELECT t.field_id,t.field1,t.field2
FROM table t
ON DUPLICATE KEY UPDATE field1 = t.field1, field2 = t.field2
This can be used inside a procedure and called every 5 minutes (not recommended) or you could build a trigger that fires on INSERT and UPDATE to keep the tables in sync.
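For the trigger route, here is a minimal sketch in MariaDB/MySQL syntax. The table name t is a placeholder for the real table (TABLE itself is a reserved word), and a matching AFTER UPDATE trigger would be needed as well:
DELIMITER //
CREATE TRIGGER db1.t_after_insert
AFTER INSERT ON db1.t
FOR EACH ROW
BEGIN
  -- push every new row into db2, updating it if the key already exists
  INSERT INTO db2.t (field_id, field1, field2)
  VALUES (NEW.field_id, NEW.field1, NEW.field2)
  ON DUPLICATE KEY UPDATE field1 = NEW.field1, field2 = NEW.field2;
END//
DELIMITER ;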
INSERT INTO database1.tabledata SELECT * FROM database2.tabledata;
But the varchar column lengths in database1 have to be greater than or equal to those in database2, and the column names have to be the same.
I've got some duplicate records in a table because, as it turns out, Netezza does not support constraint checks on primary keys. That being said, I have some records where the information is exactly the same, and I want to delete just ONE of them. I've tried doing
delete from table_name where test_id=2025 limit 1
and also
delete from table_name where test_id=2025 rowsetlimit 1
However neither option works. I get an error saying
found 'limit'. Expecting a keyword
Is there any way to limit the records deleted by this query? I know I could just delete the record and reinsert it but that is a little tedious since I will have to do this multiple times.
Please note that this is not SQL Server or MySQL. This is for Netezza.
If it doesn't support either "DELETE TOP 1" or the "LIMIT" keyword, you may end up having to do one of the following:
1) add some sort of an auto-incrementing column (like IDs), making each row unique. I don't know if you can do that in Netezza after the table has been created, though.
2) Programmatically read the entire table with some programming language, eliminate the duplicates programmatically, then delete all the rows and insert them again. This might not be possible if they are referenced by other tables, in which case you might have to temporarily remove the constraint.
I hope that helps. Please let us know.
And for future reference; this is why I personally always create an auto-incrementing ID field, even if I don't think I'll ever use it. :)
The query below works for deleting duplicates from a table:
DELETE FROM YOURTABLE
WHERE COLNAME1 = 'XYZ' AND
(
  COLNAME1,
  ROWID
)
NOT IN
(
  SELECT COLNAME1,
         MAX(ROWID)
  FROM YOURTABLE
  WHERE COLNAME1 = 'XYZ'
  GROUP BY COLNAME1
)
If the records are completely identical then you could do something like
CREATE TABLE DUPES as
SELECT col1, col2, col3, ......, coln from source_table where test_id = 2025
group by
1, 2, 3, ....., n
DELETE FROM source_table where test_id = 2025
INSERT INTO source_table select * from DUPES
DROP TABLE DUPES
You could even create a sub-query to select all the test_ids HAVING COUNT(*) > 1 to automatically find the dupes in steps 1 and 3
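A minimal sketch of that sub-query, using the same placeholder names:
SELECT test_id
FROM source_table
GROUP BY test_id
HAVING COUNT(*) > 1;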
-- remove duplicates from the <<TableName>> table
delete from <<TableName>>
where rowid not in
(
  select min(rowid)
  from <<TableName>>
  group by col1, col2, col3
);
The GROUP BY 1,2,3,....,n will eliminate the dupes on the insert into the temp table.
Is the use of rowid even allowed in Netezza? As far as I know, this query will not execute in Netezza.
We have a status table. When the status changes, we currently delete the old record and insert a new one.
We are wondering if it would be faster to do a select to check if it exists followed by an insert or update.
Although similar to the following question, it is not the same, since we are changing individual records and the other question was doing a total table refresh.
DELETE, INSERT vs UPDATE || INSERT
Since you're talking SQL Server 2008, have you considered MERGE? It's a single statement that allows you to do an update or insert:
create table T1 (
ID int not null,
Val1 varchar(10) not null
)
go
insert into T1 (ID,Val1)
select 1,'abc'
go
merge into T1
using (select 1 as ID,'def' as Val1) upd on T1.ID = upd.ID --<-- These identify the row you want to update/insert and the new value you want to set. They could be @parameters
when matched then update set Val1 = upd.Val1
when not matched then insert (ID,Val1) values (upd.ID,upd.Val1);
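For instance, a minimal sketch of wrapping that MERGE in a stored procedure with parameters (the procedure name is a placeholder):
create procedure UpsertT1
    @ID int,
    @Val1 varchar(10)
as
begin
    merge into T1
    using (select @ID as ID, @Val1 as Val1) upd on T1.ID = upd.ID
    when matched then update set Val1 = upd.Val1
    when not matched then insert (ID,Val1) values (upd.ID,upd.Val1);
end
go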
What about INSERT ... ON DUPLICATE KEY? Doing a SELECT first to check whether a record exists and then acting on the result in your program creates a race condition. That might not be important in your case if there is only a single instance of the program, however.
INSERT INTO users (username, email) VALUES ('Jo', 'jo@email.com')
ON DUPLICATE KEY UPDATE email = 'jo@email.com'
You can use @@ROWCOUNT and perform an UPDATE first. If it affected 0 rows, then perform an INSERT afterwards; otherwise do nothing.
Your suggestion would mean always two instructions for each status change. The usual way is to do an UPDATE and then check whether the operation changed any rows (most databases have a variable like ROWCOUNT, which should be greater than 0 if something changed). If it didn't, do an INSERT.
Search for UPSERT to find patterns for your specific DBMS.
Personally, I think the UPDATE method is best. Instead of doing a SELECT first to check whether a record already exists, you can attempt an UPDATE first, and if no rows are affected (using @@ROWCOUNT) do an INSERT.
The reason for this is that sooner or later you might want to track status changes, and the best way to do this would be to keep an audit trail of all changes using a trigger on the status table.
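A minimal T-SQL sketch of that UPDATE-then-@@ROWCOUNT pattern, assuming a hypothetical status table StatusTable(EntityId, Status):
DECLARE @EntityId int = 42, @NewStatus varchar(20) = 'ACTIVE';

UPDATE StatusTable
SET Status = @NewStatus
WHERE EntityId = @EntityId;

IF @@ROWCOUNT = 0
    INSERT INTO StatusTable (EntityId, Status)
    VALUES (@EntityId, @NewStatus);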
I have a Constraint on a table with IGNORE_DUP_KEY. This allows bulk inserts to partially work where some records are dupes and some are not (only inserting the non-dupes). However, it does not allow updates to partially work, where I only want those records updated where dupes will not be created.
Does anyone know how I can support IGNORE_DUP_KEY when applying updates?
I am using MS SQL 2005
If I understand correctly, you want to do UPDATEs without specifying the necessary WHERE logic to avoid creating duplicates?
create table #t (col1 int not null, col2 int not null, primary key (col1, col2))
insert into #t
select 1, 1 union all
select 1, 2 union all
select 2, 3
-- you want to do just this...
update #t set col2 = 1
-- ... but you really need to do this
update #t set col2 = 1
where not exists (
select * from #t t2
where t2.col1 = #t.col1 and t2.col2 = 1
)
The main options that come to mind are:
Use a complete UPDATE statement to avoid creating duplicates
Use an INSTEAD OF UPDATE trigger to 'intercept' the UPDATE and only do UPDATEs that won't create a duplicate
Use a row-by-row processing technique such as cursors, and wrap each UPDATE in TRY...CATCH or whatever the language's equivalent is (see the sketch below)
I don't think anyone can tell you which one is best, because it depends on what you're trying to do and what environment you're working in. But because row-by-row processing could potentially produce some false positives, I would try to stick with a set-based approach.
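For completeness, a minimal sketch of the row-by-row option, reusing the #t table from above (purely illustrative; the set-based options are still preferable):
DECLARE @col1 int, @col2 int;
DECLARE c CURSOR LOCAL STATIC FOR SELECT col1, col2 FROM #t;
OPEN c;
FETCH NEXT FROM c INTO @col1, @col2;
WHILE @@FETCH_STATUS = 0
BEGIN
    BEGIN TRY
        -- a duplicate-key violation on this UPDATE lands in the CATCH block
        UPDATE #t SET col2 = 1 WHERE col1 = @col1 AND col2 = @col2;
    END TRY
    BEGIN CATCH
        PRINT 'skipped a row that would have created a duplicate';
    END CATCH;
    FETCH NEXT FROM c INTO @col1, @col2;
END;
CLOSE c;
DEALLOCATE c;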
I'm not sure what is really going on, but if you are inserting duplicates and updating primary keys as part of a bulk-load process, then a staging table might be the solution for you. You create a table that you make sure is empty prior to the bulk load, then load it with the 100% raw data from the file, then process that data into your real tables (set-based is best). You can do things like this to insert all rows that don't already exist:
INSERT INTO RealTable
(pk, col1, col2, col3)
SELECT
pk, col1, col2, col3
FROM StageTable s
WHERE NOT EXISTS (SELECT
1
FROM RealTable r
WHERE s.pk=r.pk
)
Preventing the duplicates in the first place is best. You could also do UPDATEs on your real table by joining in the staging table, as in the sketch below. This will avoid the need to "work around" the constraints. When you work around the constraints, you usually create difficult-to-find bugs.
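A minimal sketch of that staging-table UPDATE, with the same placeholder names as the INSERT above:
UPDATE r
SET r.col1 = s.col1,
    r.col2 = s.col2,
    r.col3 = s.col3
FROM RealTable r
INNER JOIN StageTable s ON s.pk = r.pk;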
I have the feeling you should use the MERGE statement, and in the update part you should simply not update the key you want to keep unique. That also means you have to define that key as unique in your table (by setting a unique index or defining it as the primary key). Then any update or insert with a duplicate key will fail.
Edit: I think this link will help on that:
http://msdn.microsoft.com/en-us/library/bb522522.aspx
Basically I have two databases on SQL Server 2005.
I want to take the table data from one database and copy it to another database's table.
I tried this:
SELECT * INTO dbo.DB1.TempTable FROM dbo.DB2.TempTable
This didn't work.
I don't want to use a restore to avoid data loss...
Any ideas?
SELECT ... INTO creates a new table. You'll need to use INSERT. Also, you have the database and owner names reversed.
INSERT INTO DB1.dbo.TempTable
SELECT * FROM DB2.dbo.TempTable
SELECT * INTO requires that the destination table not exist.
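For example (the new table name is a placeholder), this only succeeds if DB1.dbo.TempTableCopy does not already exist:
SELECT *
INTO DB1.dbo.TempTableCopy
FROM DB2.dbo.TempTable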
Try this.
INSERT INTO db1.dbo.TempTable
(List of columns here)
SELECT (Same list of columns here)
FROM db2.dbo.TempTable
It's db1.dbo.TempTable and db2.dbo.TempTable
The four-part naming scheme goes:
ServerName.DatabaseName.Schema.Object
Hard to say without any idea what you mean by "it didn't work." There are a whole lot of things that can go wrong and any advice we give in troubleshooting one of those paths may lead you further and further from finding a solution, which may be really simple.
Here's something I would look for, though:
Identity Insert must be enabled on the table you are importing into if that table contains an identity field and you are manually supplying the values. Identity Insert can also only be enabled for one table at a time in a database, so remember to enable it for the table, then disable it immediately after you are done importing.
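A minimal sketch of that enable/insert/disable sequence, with placeholder names:
SET IDENTITY_INSERT db1.dbo.MyTable ON;

INSERT INTO db1.dbo.MyTable (Id, Col1, Col2)
SELECT Id, Col1, Col2
FROM db2.dbo.MyTable;

SET IDENTITY_INSERT db1.dbo.MyTable OFF;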
Also, try listing out all your fields:
INSERT INTO db1.user.MyTable (Col1, Col2, Col3)
SELECT Col1, COl2, Col3 FROM db2.user.MyTable
We can use three-part naming, like database_name..object_name.
The query below will create the table in our destination database (without constraints):
SELECT *
INTO DestinationDB..MyDestinationTable
FROM SourceDB..MySourceTable
Alternatively you could:
INSERT INTO DestinationDB..MyDestinationTable
SELECT * FROM SourceDB..MySourceTable
If your destination table exists and is empty.
Don't forget to put SET IDENTITY_INSERT ... ON at the top, else you will get an error. This is for SQL Server:
SET IDENTITY_INSERT MOB.MobileApplication1 ON

INSERT INTO [SERVER1].DB.MOB.MobileApplication1
(MobileApplicationDetailId,
MobilePlatformId)
SELECT ma.MobileApplicationId,
ma.MobilePlatformId
FROM [SERVER2].DB.MOB.MobileApplication2 ma
I prefer this one:
INSERT INTO DB_NAME
SELECT * FROM DB_NAME@DB_LINK
MINUS
SELECT * FROM DB_NAME;
This means it will insert whatever is included in DB_NAME@DB_LINK but not yet included in DB_NAME. Hope this helps.
INSERT INTO DB1.dbo.TempTable
SELECT * FROM DB2.dbo.TempTable
If we use this query, it can return a primary key error. So it is better to choose which columns need to be moved, like this:
INSERT INTO db1.dbo.TempTable (List of columns here)
SELECT (Same list of columns here)
FROM db2.dbo.TempTable
Try this
INSERT INTO DB1.dbo.TempTable
(COLUMNS)
SELECT COLUMNS_IN_SAME_ORDER FROM DB2.dbo.TempTable
This will only fail if an item in DB2.dbo.TempTable is already in DB1.dbo.TempTable.
This works successfully:
INSERT INTO DestinationDB.dbo.DestinationTable (col1, col2)
SELECT Src_col1, Src_col2 FROM SourceDB.dbo.SourceTable
You can copy a table to another database's table, even with some additional columns:
insert into [SchoolDb1].[dbo].Student (Col1, Col2, Col3, CreationTime, IsDeleted)
select Col1, Col2, Col3, getdate(), 0 from [SchoolDb2].[dbo].Student
These are the additional columns (CreationTime is datetime and IsDeleted is a boolean bit).
select * from DBA1.TABLENAMEA;

create table TABLENAMEA as (select * from DBA1.TABLENAMEA);

This manual way provides more flexibility, but at the same time works only for tables whose size is up to a few thousand rows.
Do select * from <table name> in the source DB; once the whole table is displayed, scroll to its bottom.
Right-click and do Export table as Insert statement, provide the name of the destination table, and export the table as a .sql file.
Use any text editor to further do regular find-and-replace operations, e.g. to include more column names.
Run the INSERT statements in the destination DB.