I have the following table:
EventID=00002,DocumentID=0005,EventDesc=ItemsReceived
I have the quantity in another table
DocumentID=0005,Qty=20
I want to generate a result of 20 rows (depending on the quantity) with an auto-generated column containing the sequence:
ITEM_TAG_001,
ITEM_TAG_002,
ITEM_TAG_003,
ITEM_TAG_004,
..
ITEM_TAG_020
Here's a SQL query using a recursive CTE:
with cte as (
    select 1 as ctr, t2.Qty, t1.EventID, t1.DocumentId, t1.EventDesc
    from tableA t1
    inner join tableB t2 on t2.DocumentId = t1.DocumentId
    union all
    select ctr + 1, Qty, EventID, DocumentId, EventDesc
    from cte
    where ctr < Qty
)
select *, concat('ITEM_TAG_', right('000' + cast(ctr as varchar(3)), 3)) as ItemTag
from cte
option (maxrecursion 0);
Output: 20 rows, tagged ITEM_TAG_001 through ITEM_TAG_020.
Best is to introduce a numbers table; it comes in handy in many places...
Something along these lines:
Create some test data:
DECLARE @MockNumbers TABLE(Number BIGINT);
DECLARE @YourTable1 TABLE(DocumentID INT, ItemTag VARCHAR(100), SomeText VARCHAR(100));
DECLARE @YourTable2 TABLE(DocumentID INT, Qty INT);
INSERT INTO @MockNumbers SELECT TOP 100 ROW_NUMBER() OVER(ORDER BY (SELECT NULL)) FROM master..spt_values;
INSERT INTO @YourTable1 VALUES(1,'FirstItem','qty 5'),(2,'SecondItem','qty 7');
INSERT INTO @YourTable2 VALUES(1,5), (2,7);
--The query
SELECT CONCAT(t1.ItemTag,'_',REPLACE(STR(A.Number,3),' ','0'))
FROM @YourTable1 t1
INNER JOIN @YourTable2 t2 ON t1.DocumentID=t2.DocumentID
CROSS APPLY(SELECT Number FROM @MockNumbers WHERE Number BETWEEN 1 AND t2.Qty) A;
The result
FirstItem_001
FirstItem_002
[...]
FirstItem_005
SecondItem_001
SecondItem_002
[...]
SecondItem_007
The idea in short:
We use an INNER JOIN to get the quantity joined to the item.
Now we use APPLY, which is a row-wise action, to bind as many rows to the set as we need.
The first item returns 5 rows, the second 7. The trick with STR() and REPLACE() is one way to create a zero-padded number. You might use FORMAT() (v2012+), but it performs rather slowly...
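For reference, a sketch of the FORMAT() variant just mentioned (SQL Server 2012+, reusing the same table variables as above; FORMAT(A.Number, '000') pads to three digits):
-- Same query as above, but with FORMAT() doing the padding (slower, as noted)
SELECT CONCAT(t1.ItemTag, '_', FORMAT(A.Number, '000'))
FROM @YourTable1 t1
INNER JOIN @YourTable2 t2 ON t1.DocumentID = t2.DocumentID
CROSS APPLY (SELECT Number FROM @MockNumbers WHERE Number BETWEEN 1 AND t2.Qty) A;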
The table @MockNumbers is a declared table variable containing a list of numbers from 1 to 100. This answer provides an example of how to create a physical numbers-and-dates table. Any database should have such a table...
If you don't want to create a numbers table, you can search for a tally table or a tally on the fly. There are many answers showing approaches for creating a list of running numbers.
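For example, a minimal tally-on-the-fly sketch for SQL Server (assuming sys.all_columns holds at least as many rows as you need, which it normally does):
-- Generate numbers 1..1000 on the fly, no physical numbers table required
WITH Tally AS
(
    SELECT TOP (1000) ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) AS Number
    FROM sys.all_columns
)
SELECT Number FROM Tally;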
I have an Access table of the form (I'm simplifying it a bit)
ID AutoNumber Primary Key
SchemeName Text (50)
SchemeNumber Text (15)
This contains some data eg...
ID SchemeName SchemeNumber
--------------------------------------------------------------------
714 Malcolm ABC123
80 Malcolm ABC123
96 Malcolms Scheme ABC123
101 Malcolms Scheme ABC123
98 Malcolms Scheme DEF888
654 Another Scheme BAR876
543 Whatever Scheme KJL111
etc...
Now. I want to remove duplicate names under the same SchemeNumber. But I want to leave the record which has the longest SchemeName for that scheme number. If there are duplicate records with the same longest length then I just want to leave only one, say, the lowest ID (but any one will do really). From the above example I would want to delete IDs 714, 80 and 101 (to leave only 96).
I thought this would be relatively easy to achieve but it's turning into a bit of a nightmare! Thanks for any suggestions. I know I could loop it programatically but I'd rather have a single DELETE query.
See if this query returns the rows you want to keep:
SELECT r.SchemeNumber, r.SchemeName, Min(r.ID) AS MinOfID
FROM
(SELECT
SchemeNumber,
SchemeName,
Len(SchemeName) AS name_length,
ID
FROM tblSchemes
) AS r
INNER JOIN
(SELECT
SchemeNumber,
Max(Len(SchemeName)) AS name_length
FROM tblSchemes
GROUP BY SchemeNumber
) AS w
ON
(r.SchemeNumber = w.SchemeNumber)
AND (r.name_length = w.name_length)
GROUP BY r.SchemeNumber, r.SchemeName
ORDER BY r.SchemeName;
If so, save it as qrySchemes2Keep. Then create a DELETE query to discard rows from tblSchemes whose ID value is not found in qrySchemes2Keep.
DELETE
FROM tblSchemes AS s
WHERE Not Exists (SELECT * FROM qrySchemes2Keep WHERE MinOfID = s.ID);
Just beware, if you later use Access' query designer to make changes to that DELETE query, it may "helpfully" convert the SQL to something like this:
DELETE s.*, Exists (SELECT * FROM qrySchemes2Keep WHERE MinOfID = s.ID)
FROM tblSchemes AS s
WHERE (((Exists (SELECT * FROM qrySchemes2Keep WHERE MinOfID = s.ID))=False));
DELETE FROM Table t1
WHERE EXISTS (SELECT 1 from Table t2
WHERE t1.SchemeNumber = t2.SchemeNumber
AND Length(t2.SchemeName) > Length(t1.SchemeName)
)
Depending on your RDBMS, you may need a different length function (Oracle: LENGTH, MySQL: LENGTH, SQL Server: LEN).
delete ShortScheme
from Scheme ShortScheme
join Scheme LongScheme
on ShortScheme.SchemeNumber = LongScheme.SchemeNumber
and (len(ShortScheme.SchemeName) < len(LongScheme.SchemeName) or (len(ShortScheme.SchemeName) = len(LongScheme.SchemeName) and ShortScheme.ID > LongScheme.ID))
(SQL Server flavored)
Now updated to include the specified tie resolution. You may get better performance doing it in two queries, though: first delete the schemes with shorter names (as in my original query), then go back and delete the higher IDs where there was a tie in name length.
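A sketch of that two-step variant, under the same assumptions (SQL Server flavored, same Scheme table and columns as the query above):
-- Step 1: delete rows whose name is shorter than the longest name for that SchemeNumber
delete ShortScheme
from Scheme ShortScheme
join Scheme LongScheme
  on ShortScheme.SchemeNumber = LongScheme.SchemeNumber
 and len(ShortScheme.SchemeName) < len(LongScheme.SchemeName);

-- Step 2: among the remaining ties, keep only the lowest ID
delete DupScheme
from Scheme DupScheme
join Scheme KeptScheme
  on DupScheme.SchemeNumber = KeptScheme.SchemeNumber
 and len(DupScheme.SchemeName) = len(KeptScheme.SchemeName)
 and DupScheme.ID > KeptScheme.ID;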
I'd do this in multiple steps. Large delete operations done in a single step make me too nervous -- what if you make a mistake? There's no SQL 'undo' statement.
-- Setup the data
DROP Table foo;
DROP Table bar;
DROP Table bat;
DROP Table baz;
CREATE TABLE foo (
id int(11) NOT NULL,
SchemeName varchar(50),
SchemeNumber varchar(15),
PRIMARY KEY (id)
);
insert into foo values (714, 'Malcolm', 'ABC123' );
insert into foo values (80, 'Malcolm', 'ABC123' );
insert into foo values (96, 'Malcolms Scheme', 'ABC123' );
insert into foo values (101, 'Malcolms Scheme', 'ABC123' );
insert into foo values (98, 'Malcolms Scheme', 'DEF888' );
insert into foo values (654, 'Another Scheme ', 'BAR876' );
insert into foo values (543, 'Whatever Scheme ', 'KJL111' );
-- Find all the records that have dups, find the longest one
create table bar as
select max(length(SchemeName)) as max_length, SchemeNumber
from foo
group by SchemeNumber
having count(*) > 1;
-- Find the one we want to keep
create table bat as
select min(a.id) as id, a.SchemeNumber
from foo a join bar b on a.SchemeNumber = b.SchemeNumber
and length(a.SchemeName) = b.max_length
group by SchemeNumber;
-- Select into this table all the rows to delete
create table baz as
select a.id from foo a join bat b on a.SchemeNumber = b.SchemeNumber
and a.id != b.id;
This will give you a new table with only records for rows that you want to remove.
Now check these out and make sure that they contain only the rows you want deleted. This way you can make sure that when you do the delete, you know exactly what to expect. It should also be pretty fast.
Then, when you're ready, use this command to delete the rows:
delete from foo where id in (select id from baz);
This seems like more work because of the different tables, but it's safer and probably just as fast as the other ways. Plus you can stop at any step and make sure the data is what you want before you do any actual deletes.
If your platform supports ranking functions and common table expressions:
with cte as (
select row_number()
over (partition by SchemeNumber order by len(SchemeName) desc) as rn
from Table)
delete from cte where rn > 1;
try this:
Select * From Table t
Where Len(SchemeName) <
(Select Max(Len(Schemename))
From Table
Where SchemeNumber = t.SchemeNumber )
And Id >
(Select Min (Id)
From Table
Where SchemeNumber = t.SchemeNumber
And SchemeName = t.SchemeName)
or this:
Select * From Table t
Where Id >
(Select Min(Id) From Table
Where SchemeNumber = t.SchemeNumber
And Len(SchemeName) <
(Select Max(Len(Schemename))
From Table
Where SchemeNumber = t.SchemeNumber))
If either of these selects the records that should be deleted, just change it to a delete:
Delete
From Table t
Where Len(SchemeName) <
(Select Max(Len(Schemename))
From Table
Where SchemeNumber = t.SchemeNumber )
And Id >
(Select Min (Id)
From Table
Where SchemeNumber = t.SchemeNumber
And SchemeName = t.SchemeName)
or using the second construction:
Delete From Table t Where Id >
(Select Min(Id) From Table
Where SchemeNumber = t.SchemeNumber
And Len(SchemeName) <
(Select Max(Len(Schemename))
From Table
Where SchemeNumber = t.SchemeNumber))
I need to test my mail server. How can I make a Select statement
that selects say ID=5469 a thousand times.
If I get your meaning, a very simple way is to cross join against a derived query on a table with more than 1000 rows and put a TOP 1000 on it. This will duplicate your result 1000 times.
EDIT: As an example (This is MSSQL, I don't know if Access is much different)
SELECT
MyTable.*
FROM
MyTable
CROSS JOIN
(
SELECT TOP 1000
*
FROM
sysobjects
) [BigTable]
WHERE
MyTable.ID = 1234
You can use the UNION ALL statement.
Try something like:
SELECT * FROM tablename WHERE ID = 5469
UNION ALL
SELECT * FROM tablename WHERE ID = 5469
You'd have to repeat the SELECT statement a bunch of times but you could write a bit of VB code in Access to create a dynamic SQL statement and then execute it. Not pretty but it should work.
Create a helper table for this purpose:
JUST_NUMBER(NUM INT primary key)
Insert (with the help of some (VB) script) numbers from 1 to N. Then execute this unjoined query:
SELECT MYTABLE.*
FROM MYTABLE,
JUST_NUMBER
WHERE MYTABLE.ID = 5469
AND JUST_NUMBER.NUM <= 1000
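If you happen to be on SQL Server rather than Access, a sketch of filling that helper table without a script (assumes N = 1000 and that sys.all_columns has at least that many rows):
-- Populate JUST_NUMBER with 1..1000 in one set-based insert
INSERT INTO JUST_NUMBER (NUM)
SELECT TOP (1000) ROW_NUMBER() OVER (ORDER BY (SELECT NULL))
FROM sys.all_columns;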
Here's a way of using a recursive common table expression to generate some empty rows, then to cross join them back onto your desired row:
declare @myData table (val int) ;
insert @myData values (666),(888),(777) --some dummy data
;with cte as
(
select 100 as a
union all
select a-1 from cte where a>0
--generate 101 rows (100 is the default max recursion depth)
)
,someRows as
(
select top 1000 0 a from cte,cte x1,cte x2
--xjoin the hundred rows a few times
--to generate 1030301 rows, then select top n rows
)
select m.* from @myData m,someRows where m.val=666
substitute @myData for your real table, and alter the final predicate to suit.
An easy way...
Only one row exists in the DB:
sku = 52, description = Skullcandy Inkd Green, price = 50,00
Relate another table that has no key constraint to the main table.
Original query:
SELECT Prod_SKU , Prod_Descr , Prod_Price FROM dbo.TB_Prod WHERE Prod_SKU = N'52'
The functional query, adding an unrelated table called 'dbo.TB_Labels' (replace 1000 with the number of copies you need; TB_Labels must have at least that many rows):
SELECT TOP (1000) Prod_SKU , Prod_Descr , Prod_Price FROM dbo.TB_Prod, dbo.TB_Labels WHERE Prod_SKU = N'52'
In Postgres there is a nice function called generate_series. So in PostgreSQL it is as simple as:
select information from test_table, generate_series(1, 1000) where id = 5469
This way, the matching row is returned 1000 times.
Example for PostgreSQL:
CREATE EXTENSION IF NOT EXISTS "uuid-ossp"; --To be able to use function uuid_generate_v4()
--Create a test table
create table test_table (
id serial not null,
uid UUID NOT NULL,
CONSTRAINT uid_pk PRIMARY KEY(id));
-- Insert 10000 rows
insert into test_table (uid)
select uuid_generate_v4() from generate_series(1, 10000);
-- Read the data from id=5469 one thousand times
select id, uid, uuid_generate_v4() from test_table, generate_series(1, 1000) where id = 5469;
As you can see in the result below, the row's uid is read 1000 times, as confirmed by a new uuid being generated for every output row.
id |uid |uuid_generate_v4
----------------------------------------------------------------------------------------
5469|"10791df5-ab72-43b6-b0a5-6b128518e5ee"|"5630cd0d-ee47-4d92-9ee3-b373ec04756f"
5469|"10791df5-ab72-43b6-b0a5-6b128518e5ee"|"ed44b9cb-c57f-4a5b-ac9a-55bd57459c02"
5469|"10791df5-ab72-43b6-b0a5-6b128518e5ee"|"3428b3e3-3bb2-4e41-b2ca-baa3243024d9"
5469|"10791df5-ab72-43b6-b0a5-6b128518e5ee"|"7c8faf33-b30c-4bfa-96c8-1313a4f6ce7c"
5469|"10791df5-ab72-43b6-b0a5-6b128518e5ee"|"b589fd8a-fec2-4971-95e1-283a31443d73"
5469|"10791df5-ab72-43b6-b0a5-6b128518e5ee"|"8b9ab121-caa4-4015-83f5-0c2911a58640"
5469|"10791df5-ab72-43b6-b0a5-6b128518e5ee"|"7ef63128-b17c-4188-8056-c99035e16c11"
5469|"10791df5-ab72-43b6-b0a5-6b128518e5ee"|"5bdc7425-e14c-4c85-a25e-d99b27ae8b9f"
5469|"10791df5-ab72-43b6-b0a5-6b128518e5ee"|"9bbd260b-8b83-4fa5-9104-6fc3495f68f3"
5469|"10791df5-ab72-43b6-b0a5-6b128518e5ee"|"c1f759e1-c673-41ef-b009-51fed587353c"
5469|"10791df5-ab72-43b6-b0a5-6b128518e5ee"|"4a70bf2b-ddf5-4c42-9789-5e48e2aec441"
Of course, other DBs won't necessarily have the same function, but it can usually be emulated with a numbers source.
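For instance, a rough SQL Server sketch that mimics generate_series with a ROW_NUMBER() derived table (reusing the test_table name from the example above; newer SQL Server versions also ship a built-in GENERATE_SERIES):
-- Return the id = 5469 row 1000 times by cross joining against 1000 generated numbers
SELECT t.*
FROM test_table t
CROSS JOIN (
    SELECT TOP (1000) ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) AS n
    FROM sys.all_columns
) AS series
WHERE t.id = 5469;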
If you are doing this in SQL Server:
declare @cnt int
set @cnt = 0
while @cnt < 1000
begin
    select '12345'
    set @cnt = @cnt + 1
end
The select '12345' can be any expression.
This may be another solution: repeat rows based on a column value of TestTable.
First run the CREATE TABLE and INSERT statements below, then run the final query for the desired result.
CREATE TABLE TestTable
(
ID INT IDENTITY(1,1),
Col1 varchar(10),
Repeats INT
)
INSERT INTO TESTTABLE
VALUES ('A',2), ('B',4),('C',1),('D',0)
WITH x AS
(
SELECT TOP (SELECT MAX(Repeats)+1 FROM TestTable) rn = ROW_NUMBER()
OVER (ORDER BY [object_id])
FROM sys.all_columns
ORDER BY [object_id]
)
SELECT * FROM x
CROSS JOIN TestTable AS d
WHERE x.rn <= d.Repeats
ORDER BY Col1;
This trick helped me with my requirement.
Here, PRODUCTDETAILS is my data table and orderid is my column.
declare @Req_Rows int = 12
;WITH cte AS
(
SELECT 1 AS Number
UNION ALL
SELECT Number + 1 FROM cte WHERE Number < @Req_Rows
)
SELECT PRODUCTDETAILS.*
FROM cte, PRODUCTDETAILS
WHERE PRODUCTDETAILS.orderid = 3
create table #tmp1 (id int, fld varchar(max))
insert into #tmp1 (id, fld)
values (1,'hello!'),(2,'world'),(3,'nice day!')
select * from #tmp1
go
select * from #tmp1 where id=3
go 1000
drop table #tmp1
In SQL Server, try:
print 'wow'
go 5
output:
Beginning execution loop
wow
wow
wow
wow
wow
Batch execution completed 5 times.
The easy way is to create a table with 1000 rows. Let's call it BigTable. Then you would query for the data you want and join it with the big table, like this:
SELECT MyTable.*
FROM MyTable, BigTable
WHERE MyTable.ID = 5469
My stored procedure looks like:
WITH MYCTE(....)
AS
(
...
)
UPDATE ... (using my CTE)
DELETE ( using my CTE) <--- says the object, my CTE, doesn't exist
Can I only use it once?
In your example code, the CTE only persists for the UPDATE. If you need it to last longer, consider populating a #tempTable or @tableVariable with it, and then UPDATE and DELETE from those.
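A minimal sketch of that temp-table route (the table and column names here are only illustrative, not from the question):
-- Materialize the rows the CTE would have selected (dbo.MySourceTable is hypothetical)
SELECT PK, col1
INTO #MyCteRows
FROM dbo.MySourceTable
WHERE col1 = 'g';

-- Reuse the materialized set in both statements
UPDATE t
SET t.col1 = 'xyz'
FROM dbo.MySourceTable t
JOIN #MyCteRows c ON c.PK = t.PK;

DELETE t
FROM dbo.MySourceTable t
JOIN #MyCteRows c ON c.PK = t.PK;

DROP TABLE #MyCteRows;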
You may also augment your UPDATE with an OUTPUT clause, like the following, so you can capture the affected rows and use them in the DELETE:
set nocount on
DECLARE @Table table (PK int, col1 varchar(5))
DECLARE @SavedPks table (PK int)
INSERT INTO @Table VALUES (1,'g')
INSERT INTO @Table VALUES (2,'g')
INSERT INTO @Table VALUES (3,'g')
INSERT INTO @Table VALUES (4,'g')
INSERT INTO @Table VALUES (5,'x')
INSERT INTO @Table VALUES (6,'x')
set nocount off
;WITH MYCTE
AS
(
SELECT PK, col1 FROM @Table
)
UPDATE MYCTE
SET col1='xyz'
OUTPUT INSERTED.PK
INTO @SavedPks
WHERE col1='g'
SELECT 'A',* FROM @Table
DELETE @Table
WHERE PK IN (SELECT PK from @SavedPks)
SELECT 'B',* FROM @Table
OUTPUT:
(4 row(s) affected)
PK col1
---- ----------- -----
A 1 xyz
A 2 xyz
A 3 xyz
A 4 xyz
A 5 x
A 6 x
(6 row(s) affected)
(4 row(s) affected)
PK col1
---- ----------- -----
B 5 x
B 6 x
(2 row(s) affected)
A CTE is only in scope for the single statement that immediately follows it. If you want to use it in other places, you have to repeat the WITH clause too.
WITH MYCTE(....) AS ( ... )
UPDATE ... (using my CTE);
-- a semicolon is necessary to terminate the statement preceding a CTE declaration
WITH MYCTE(....) AS ( ... )
DELETE ( using my CTE);
CTEs don't create anything 'real'. They are merely a language element, a way to express a table expression that will be used, possibly repeatedly, in a statement. When you say
WITH cteFoo AS (select ... from table where ...)
select ... from cteFoo where ...
is just another way of saying
select ... from (select ... from table where ....) as cteFoo where ...
CTEs and derived tables are very similar: any query using derived tables can be rewritten as a CTE, and any non-recursive CTE can be rewritten as a query using derived tables. Personally, I much prefer the CTE form, as it is more concise and easier to read.
CTEs allow a table expression used multiple times to be declared only once:
WITH cte AS (select ... from table where ...)
select ...
from cte a join cte b on ...
where ...
Compare this with the semantically similar derived table form:
select ...
from (
select ... from table where ...) as a
join (
select ... from table where ...) as b
on ...
where ...
The CTE is clearly more readable. But you must understand that the two forms produce the same query. The CTE form might suggest that an intermediate result is created and the join is then run on that intermediate result, but this is not true. The CTE form is compiled into exactly the same form as the derived table one, which makes it clear that the CTE's table expression is run twice.
Yep, the WITH MYCTE clause does not create a permanent object to use in multiple queries afterwards: it only modifies the one query you're adding that clause to! If you need very different functionality, consider using views instead...
I was wondering if it is possible to move all rows of data from one table to another, that match a certain query?
For example, I need to move all table rows from Table1 to Table2 where their username = 'X' and password = 'X', so that they will no longer appear in Table1.
I'm using SQL Server 2008 Management Studio.
Should be possible using two statements within one transaction, an insert and a delete:
BEGIN TRANSACTION;
INSERT INTO Table2 (<columns>)
SELECT <columns>
FROM Table1
WHERE <condition>;
DELETE FROM Table1
WHERE <condition>;
COMMIT;
This is the simplest form. If you have to worry about new matching records being inserted into Table1 between the two statements, you can add an AND EXISTS (... in Table2) check to the DELETE, as sketched below.
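A sketch of that, with the same placeholders as above and a hypothetical key column named id:
BEGIN TRANSACTION;

INSERT INTO Table2 (<columns>)
SELECT <columns>
FROM Table1
WHERE <condition>;

-- Only delete rows that actually made it into Table2 (id is a hypothetical key column)
DELETE FROM Table1
WHERE <condition>
  AND EXISTS (SELECT 1 FROM Table2 WHERE Table2.id = Table1.id);

COMMIT;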
This is an ancient post, sorry, but I only came across it now and I wanted to give my solution to whoever might stumble upon this one day.
As some have mentioned, performing an INSERT and then a DELETE might lead to integrity issues, so perhaps a way to get around it, and to perform everything neatly in a single statement, is to take advantage of the [deleted] temporary table.
DELETE FROM [source]
OUTPUT [deleted].<column_list>
INTO [destination] (<column_list>)
All these answers run the same query for the INSERT and DELETE. As mentioned previously, this risks the DELETE picking up records inserted between statements and could be slow if the query is complex (although clever engines "should" make the second call fast).
The correct way (assuming the INSERT is into a fresh table) is to do the DELETE against table1 using the key field of table2.
The delete should be:
DELETE FROM tbl_OldTableName WHERE id in (SELECT id FROM tbl_NewTableName)
Excuse my syntax, I'm jumping between engines but you get the idea.
A cleaner representation of what some other answers have hinted at:
DELETE sourceTable
OUTPUT DELETED.*
INTO destTable (Comma, separated, list, of, columns)
WHERE <conditions (if any)>
Yes it is. First INSERT + SELECT and then DELETE the originals.
INSERT INTO Table2 (UserName,Password)
SELECT UserName,Password FROM Table1 WHERE UserName='X' AND Password='X'
then delete the originals
DELETE FROM Table1 WHERE UserName='X' AND Password='X'
you may want to preserve UserID or some other primary key; in that case you can use IDENTITY_INSERT to preserve the key.
See more on SET IDENTITY_INSERT on MSDN.
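A hedged sketch of that, assuming Table2 has an identity column named UserID:
-- Allow explicit values to be inserted into Table2's identity column
SET IDENTITY_INSERT Table2 ON;

INSERT INTO Table2 (UserID, UserName, Password)
SELECT UserID, UserName, Password
FROM Table1
WHERE UserName = 'X' AND Password = 'X';

SET IDENTITY_INSERT Table2 OFF;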
You should be able to do it with a subquery in the INSERT statement.
INSERT INTO table2 (column1, column2) SELECT column1, column2 FROM table1 WHERE ...;
followed by deleting the copied rows from table1.
Remember to run it as a single transaction so that if anything goes wrong you can roll the entire operation back.
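For example, a sketch of both statements in one transaction with a rollback on error (SQL Server; table and column names as in the earlier answers):
BEGIN TRY
    BEGIN TRANSACTION;

    INSERT INTO Table2 (UserName, Password)
    SELECT UserName, Password
    FROM Table1
    WHERE UserName = 'X' AND Password = 'X';

    DELETE FROM Table1
    WHERE UserName = 'X' AND Password = 'X';

    COMMIT TRANSACTION;
END TRY
BEGIN CATCH
    -- Undo everything if either statement fails
    IF @@TRANCOUNT > 0 ROLLBACK TRANSACTION;
    THROW;
END CATCH;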
Use this single SQL statement; it is safe and there is no need for commit/rollback across multiple statements.
INSERT Table2 (
username,password
) SELECT username,password
FROM (
DELETE Table1
OUTPUT
DELETED.username,
DELETED.password
WHERE username = 'X' and password = 'X'
) AS RowsToMove ;
Works on SQL Server; make appropriate changes for MySQL.
Try this
INSERT INTO TABLE2 (Cols...) SELECT Cols... FROM TABLE1 WHERE Criteria
Then
DELETE FROM TABLE1 WHERE Criteria
You could try this:
SELECT * INTO tbl_NewTableName
FROM tbl_OldTableName
WHERE Condition1=@Condition1Value
Then run a simple delete:
DELETE FROM tbl_OldTableName
WHERE Condition1=@Condition1Value
You may use "Logical Partitioning" to switch data between tables.
By updating the partition column, data will automatically be moved to the other table.
Here is the sample:
CREATE TABLE TBL_Part1
(id INT NOT NULL,
val VARCHAR(10) NULL,
PartitionColumn VARCHAR(10) CONSTRAINT CK_Part1 CHECK(PartitionColumn = 'TBL_Part1'),
CONSTRAINT TBL_Part1_PK PRIMARY KEY(PartitionColumn, id)
);
CREATE TABLE TBL_Part2
(id INT NOT NULL,
val VARCHAR(10) NULL,
PartitionColumn VARCHAR(10) CONSTRAINT CK_Part2 CHECK(PartitionColumn = 'TBL_Part2'),
CONSTRAINT TBL_Part2_PK PRIMARY KEY(PartitionColumn, id)
);
GO
CREATE VIEW TBL(id, val, PartitionColumn)
WITH SCHEMABINDING
AS
SELECT id, val, PartitionColumn FROM dbo.TBL_Part1
UNION ALL
SELECT id, val, PartitionColumn FROM dbo.TBL_Part2;
GO
--Insert sample to TBL ( will be inserted to Part1 )
INSERT INTO TBL
VALUES(1, 'rec1', 'TBL_Part1');
INSERT INTO TBL
VALUES(2, 'rec2', 'TBL_Part1');
GO
--Query sub table to verify
SELECT * FROM TBL_Part1
GO
--move the data to table TBL_Part2 by Logical Partition switching technique
UPDATE TBL
SET
PartitionColumn = 'TBL_Part2';
GO
--Query sub table to verify
SELECT * FROM TBL_Part2
Here is how to do it with a single statement (PostgreSQL):
WITH deleted_rows AS (
DELETE FROM source_table WHERE id = 1
RETURNING *
)
INSERT INTO destination_table
SELECT * FROM deleted_rows;
EXAMPLE:
postgres=# select * from test1 ;
id | name
----+--------
1 | yogesh
2 | Raunak
3 | Varun
(3 rows)
postgres=# select * from test2;
id | name
----+------
(0 rows)
postgres=# WITH deleted_rows AS (
postgres(# DELETE FROM test1 WHERE id = 1
postgres(# RETURNING *
postgres(# )
postgres-# INSERT INTO test2
postgres-# SELECT * FROM deleted_rows;
INSERT 0 1
postgres=# select * from test2;
id | name
----+--------
1 | yogesh
(1 row)
postgres=# select * from test1;
id | name
----+--------
2 | Raunak
3 | Varun
If the two tables use the same ID or have a common UNIQUE key:
1) Insert the selected record in table 2
INSERT INTO table2 SELECT * FROM table1 WHERE (conditions)
2) delete the selected record from table1 if it is present in table2
DELETE A FROM table1 AS A, table2 AS B WHERE (A.conditions) AND (A.ID = B.ID)
This will create a new table and copy all the data from the old table into it:
SELECT * INTO event_log_temp FROM event_log
Then you can clear the old table's data:
DELETE FROM event_log
For some scenarios, it might be easiest to script out Table1, rename the existing Table1 to Table2, and run the script to recreate an empty Table1.
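A hedged sketch of that approach on SQL Server (the column list is illustrative; use your scripted-out definition of Table1):
-- Rename the existing, populated table so it becomes Table2
EXEC sp_rename 'dbo.Table1', 'Table2';

-- Re-run the scripted CREATE TABLE to get a fresh, empty Table1
CREATE TABLE dbo.Table1
(
    ID INT PRIMARY KEY,
    UserName VARCHAR(50),
    Password VARCHAR(50)
);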