How to convert a loop in SQL to set-based logic

I have spent a good portion of today and yesterday attempting to decide whether to use a loop or cursor in SQL, or to figure out how to use set-based logic to solve the problem. I am not new to set logic, but this problem seems to be particularly complex.
The Problem
The idea is that if I have a list of all transactions (tens to hundreds of millions of rows) and the date each occurred, I can start combining some of that data into a daily totals table so that it is more rapidly viewable by reporting and analytic systems. The pseudocode for this is as follows:
foreach ( row in transactions_table )
    if ( row in totals_table already exists )
        update totals_table, add my totals to the totals row
    else
        insert into totals_table with my row as the base values
    delete ( or archive ) row
As you can tell, the body of the loop is relatively trivial to implement, as is the cursor/looping iteration. However, the execution time is quite slow and unwieldy, and my question is: is there a non-iterative way to perform such a task, or is this one of the rare exceptions where I just have to "suck it up" and use a cursor?
There have been a few discussions on the topic, some of which seem to be similar, but not usable due to the if/else statement and the operations on another table, for instance:
How to merge rows of SQL data on column-based logic? This question doesn't seem to be applicable because it simply returns a view of all sums, and doesn't actually make logical decisions about additions or updates to another table
SQL Looping seems to have a few ideas about selection with a couple of CASE statements, which seems possible, but there are two operations that I need done depending upon the status of another table, so this solution does not seem to fit.
SQL Call Stored Procedure for each Row without using a cursor This solution seems to be the closest to what I need to do, in that it can handle arbitrary numbers of operations on each row, but there doesn't seem to be a consensus among that group.
Any advice how to tackle this frustrating problem?
Notes
I am using SQL Server 2008
The schema setup is as follows:
Totals: (id int pk, totals_date date, store_id int fk, machine_id int fk, total_in, total_out)
Transactions: (transaction_id int pk, transaction_date datetime, store_id int fk, machine_id int fk, transaction_type (IN or OUT), transaction_amount decimal)
The totals should be computed by store, by machine, and by date, and should total all of the IN transactions into total_in and the OUT transactions into total_out. The goal is to get a pseudo data cube going.

You would do this in two set-based statements:
BEGIN TRANSACTION;

DECLARE @keys TABLE(some_key INT);

UPDATE tot
    SET totals += tx.amount
    OUTPUT inserted.some_key -- key values updated
    INTO @keys
FROM dbo.totals_table AS tot WITH (UPDLOCK, HOLDLOCK)
INNER JOIN
(
    SELECT t.some_key, amount = SUM(amount)
    FROM dbo.transactions_table AS t WITH (HOLDLOCK)
    INNER JOIN dbo.totals_table AS tot
        ON t.some_key = tot.some_key
    GROUP BY t.some_key
) AS tx
    ON tot.some_key = tx.some_key;

INSERT dbo.totals_table(some_key, amount)
    OUTPUT inserted.some_key INTO @keys
SELECT some_key, SUM(amount)
FROM dbo.transactions_table AS tx
WHERE NOT EXISTS
(
    SELECT 1 FROM dbo.totals_table
    WHERE some_key = tx.some_key
)
GROUP BY some_key;

DELETE dbo.transactions_table
WHERE some_key IN (SELECT some_key FROM @keys);

COMMIT TRANSACTION;
(Error handling, applicable isolation level, rollback conditions etc. omitted for brevity.)
You do the update first so you don't insert new rows and then immediately update them, performing the work twice and possibly double counting. The OUTPUT clause in both statements captures the affected keys into a table variable (or a temp table, if you prefer) so you can then archive/delete those rows from the transactions table.
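Mapped onto the schema in the question, the same two-statement pattern would look roughly like this. This is only a sketch: the grouping by CAST(transaction_date AS DATE) and the conditional SUMs for the IN/OUT split are assumptions about how total_in and total_out should be computed.
BEGIN TRANSACTION;

DECLARE @keys TABLE (totals_date DATE, store_id INT, machine_id INT);

-- Roll the transactions up to one row per store/machine/day, then add to existing totals
;WITH tx AS
(
    SELECT CAST(transaction_date AS DATE) AS totals_date,
           store_id,
           machine_id,
           SUM(CASE WHEN transaction_type = 'IN'  THEN transaction_amount ELSE 0 END) AS total_in,
           SUM(CASE WHEN transaction_type = 'OUT' THEN transaction_amount ELSE 0 END) AS total_out
    FROM dbo.Transactions
    GROUP BY CAST(transaction_date AS DATE), store_id, machine_id
)
UPDATE t
    SET t.total_in  += tx.total_in,
        t.total_out += tx.total_out
    OUTPUT inserted.totals_date, inserted.store_id, inserted.machine_id INTO @keys
FROM dbo.Totals AS t WITH (UPDLOCK, HOLDLOCK)
INNER JOIN tx
    ON  tx.totals_date = t.totals_date
    AND tx.store_id    = t.store_id
    AND tx.machine_id  = t.machine_id;

-- Insert daily rows that don't exist yet
INSERT dbo.Totals (totals_date, store_id, machine_id, total_in, total_out)
    OUTPUT inserted.totals_date, inserted.store_id, inserted.machine_id INTO @keys
SELECT CAST(transaction_date AS DATE),
       store_id,
       machine_id,
       SUM(CASE WHEN transaction_type = 'IN'  THEN transaction_amount ELSE 0 END),
       SUM(CASE WHEN transaction_type = 'OUT' THEN transaction_amount ELSE 0 END)
FROM dbo.Transactions AS tr
WHERE NOT EXISTS
(
    SELECT 1 FROM dbo.Totals AS t
    WHERE t.totals_date = CAST(tr.transaction_date AS DATE)
      AND t.store_id    = tr.store_id
      AND t.machine_id  = tr.machine_id
)
GROUP BY CAST(transaction_date AS DATE), store_id, machine_id;

-- Remove (or archive) the transactions that were just folded into the totals
DELETE tr
FROM dbo.Transactions AS tr
INNER JOIN @keys AS k
    ON  k.totals_date = CAST(tr.transaction_date AS DATE)
    AND k.store_id    = tr.store_id
    AND k.machine_id  = tr.machine_id;

COMMIT TRANSACTION;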
I'd caution you to not get too excited about MERGE until they've resolved some of these bugs and you have read enough about it to be sure you're not lulled into any false confidence about how much "better" it is for concurrency and atomicity without additional hints. The race conditions you can work around; the bugs you can't.
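For reference, the MERGE form of the same upsert, with the explicit HOLDLOCK hint alluded to above, would look roughly like this (a syntax sketch only, using the same placeholder names as the answer, not a recommendation to prefer MERGE):
MERGE dbo.totals_table WITH (HOLDLOCK) AS tgt
USING
(
    SELECT some_key, SUM(amount) AS amount
    FROM dbo.transactions_table
    GROUP BY some_key
) AS src
    ON tgt.some_key = src.some_key
WHEN MATCHED THEN
    UPDATE SET tgt.totals = tgt.totals + src.amount
WHEN NOT MATCHED THEN
    INSERT (some_key, amount) VALUES (src.some_key, src.amount);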
Another alternative, from Nikola's comment
CREATE VIEW dbo.TotalsView
WITH SCHEMABINDING
AS
SELECT some_key_column(s), total_amount = SUM(amount), row_count = COUNT_BIG(*)
FROM dbo.Transaction_Table
GROUP BY some_key_column(s);
GO
CREATE UNIQUE CLUSTERED INDEX some_key ON dbo.TotalsView(some_key_column(s));
GO
Now if you want to write queries that grab the totals, you can reference the view directly or - depending on query and edition - the view may automatically be matched even if you reference the base table.
Note: if you are not on Enterprise Edition, you may have to use the NOEXPAND hint to take advantage of the pre-aggregated values materialized by the view.
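For example, a query against the pre-aggregated data might look like this (column names as in the sketch above; the @store_key variable is just a placeholder):
SELECT some_key_column(s), total_amount
FROM dbo.TotalsView WITH (NOEXPAND)
WHERE some_key_column(s) = @store_key;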

I do not think you need the loop.
You can just:
Update all rows/sums that match your filters/groups, then archive/delete those source rows.
Insert all rows that do not match your filters/groups, then archive/delete those source rows.
SQL is meant to work on sets of data, not on rows one by one.

Related

Questions about stored procedure to update row with total value of rows deleted

I want to delete rows from a Transactions table (which has a foreign key to my Customers table), and then update Customers.StartingBalance to reflect the sum of the deleted amounts.
So I have created a stored procedure. Here's what I have so far.
SET NOCOUNT ON;
DECLARE @CustomerBalances TABLE
(
    CustomerId INT,
    Amount BIGINT
);

-- Note: Caller has already begun a transaction
DELETE Transactions WITH (TABLOCK)
OUTPUT deleted.CustomerId, deleted.TotalAmount INTO @CustomerBalances
WHERE [TimeStamp] < @ArchiveDateTime;

IF EXISTS (SELECT 1 FROM @CustomerBalances)
BEGIN
    UPDATE Customers WITH (TABLOCK)
    SET StartingBalance = StartingBalance +
        (SELECT SUM(Amount) FROM @CustomerBalances cb WHERE Id = cb.CustomerId)
END;

DELETE FROM @CustomerBalances
Since SQL is not my core competency, I'm trying to understand this query better. In particular, I have some questions about the UPDATE statement above.
This will update all Customers because I have no WHERE clause, right?
This correctly handles cases where a customer has more than one matching row in the @CustomerBalances table, right?
Is the EXISTS clause needed here?
Will SUM() return 0 or NULL if there are no matching rows?
Does everything get cleaned up if I don't have the final DELETE statement?
It is critical that no changes are made to the Transactions or Customers table while I'm doing this. Does my use of TABLOCK make sense here?
Any suggestions about the overall approach I'm taking?
This will update all Customers because I have no WHERE clause, right?
Yes. Consider adding a WHERE clause such as:
WHERE Id IN (SELECT DISTINCT CustomerId FROM @CustomerBalances)
This prevents updating balances that haven't changed.
This correctly handles cases where a customer has more than one matching row in the @CustomerBalances table, right?
Yes. Because you use SUM() to aggregate them.
Is the EXISTS clause needed here?
It's recommended rather than essential. It's good practice so that you only attempt to update balances when records have been archived.
Will SUM() return 0 or NULL if there are no matching rows?
It will return NULL, and that is a bug: it will cause balances to be set to NULL (or the statement to error if NULL is not allowed) for customers who had no transactions archived. This will be fixed by adding the WHERE clause noted above. If you're trying to avoid the WHERE for some reason, you can fix it with COALESCE(SUM(Amount), 0).
Does everything get cleaned up if I don't have the final DELETE statement?
Yes. When the procedure completes, the table variable goes out of scope automatically, so the final DELETE isn't needed, as far as this snippet shows.
It is critical that no changes are made to the Transactions or Customers table while I'm doing this. Does my use of TABLOCK make sense here?
Yes, but you should also specify HOLDLOCK to keep the lock until the transaction completes.
Any suggestions about the overall approach I'm taking?
See above, but in general it looks to be reasonable.
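Putting those suggestions together, the UPDATE might end up looking something like this (a sketch only, combining the WHERE clause, COALESCE and HOLDLOCK recommendations above):
UPDATE Customers WITH (TABLOCK, HOLDLOCK)
SET StartingBalance = StartingBalance +
    COALESCE((SELECT SUM(cb.Amount)
              FROM @CustomerBalances cb
              WHERE cb.CustomerId = Customers.Id), 0)
WHERE Customers.Id IN (SELECT CustomerId FROM @CustomerBalances);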

Alternatives to UPDATE statement Oracle 11g

I'm currently using Oracle 11g and let's say I have a table with the following columns (more or less)
Table1
ID varchar(64)
Status int(1)
Transaction_date date
tons of other columns
And this table has about 1 Billion rows. I would want to update the status column with a specific where clause, let's say
where transaction_date = somedatehere
What other alternatives can I use rather than just the normal UPDATE statement?
Currently what I'm trying to do is use CTAS or INSERT INTO ... SELECT to copy the rows that I want to update into another table, using a literal AS COLUMN_NAME so the values are already updated in the new/temporary table. It looks something like this:
INSERT INTO TABLE1_TEMPORARY (
    ID,
    STATUS,
    TRANSACTION_DATE,
    TONS_OF_OTHER_COLUMNS)
SELECT
    ID,
    3 AS STATUS,
    TRANSACTION_DATE,
    TONS_OF_OTHER_COLUMNS
FROM TABLE1
WHERE
    TRANSACTION_DATE = SOMEDATE
So far everything seems to work faster than the normal UPDATE statement. The problem now is that I also want the remaining data from the original table, which I do not need to update but which does need to be included in my updated table/list.
What I tried at first was to use DELETE on the same original table with the same WHERE clause, so that in theory everything left in that table would be the data that I do not need to update, leaving me with two tables:
TABLE1 --which now contains the rows that i did not need to update
TABLE1_TEMPORARY --which contains the data I updated
But the DELETE statement in itself is also too slow, about as slow as the original UPDATE statement, so leaving out the delete brings me to this point:
TABLE1 --which contains BOTH the data that I want to update and do not want to update
TABLE1_TEMPORARY --which contains the data I updated
What other alternatives can I use in order to get the data that's the opposite of my WHERE clause? (Note that the WHERE clause in this example has been simplified, so I'm not looking for an answer of NOT EXISTS/NOT IN/NOT EQUALS; those clauses are also slower than positive clauses.)
I have ruled out deletion by partition since the data I need to update and not update can exist in different partitions, as well as TRUNCATE since I'm not updating all of the data, just part of it.
Is there some kind of JOIN statement I use with my TABLE1 and TABLE1_TEMPORARY in order to filter out the data that does not need to be updated?
I would also like to achieve this with as little REDO/UNDO/logging as possible.
Thanks in advance.
I'm assuming this is not a one-time operation, but you are trying to design for a repeatable procedure.
Partition/subpartition the table in a way so the rows touched are not totally spread over all partitions but confined to a few partitions.
Ensure your transactions wouldn't use these partitions for now.
For each partition/subpartition that you would normally UPDATE, perform a CTAS of all the rows (even the rows which stay the same go to TABLE1_TEMPORARY). Then EXCHANGE PARTITION and rebuild the index partitions.
At the end rebuild global indexes.
If you don't have Oracle Enterprise Edition, you would need to either CTAS the entire billion rows (followed by ALTER TABLE RENAME instead of ALTER TABLE EXCHANGE PARTITION) or prepare some kind of "poor man's partitioning" using a view (SELECT UNION ALL SELECT UNION ALL SELECT, etc.) and a bunch of tables.
There is some chance that this mess would actually be faster than UPDATE.
I'm not saying that this is elegant or optimal, I'm saying that this is the canonical way of speeding up large UPDATE operations in Oracle.
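A rough sketch of the per-partition approach (the partition name, date literal and NOLOGGING choice are illustrative only, and the real table would list all of its columns):
-- build the "updated" copy of one partition, including the unchanged rows
CREATE TABLE TABLE1_TEMPORARY NOLOGGING AS
SELECT ID,
       CASE WHEN TRANSACTION_DATE = DATE '2015-06-01' THEN 3 ELSE STATUS END AS STATUS,
       TRANSACTION_DATE,
       TONS_OF_OTHER_COLUMNS
FROM   TABLE1 PARTITION (P_2015_06);

-- recreate TABLE1's local indexes/constraints on TABLE1_TEMPORARY here, then swap it in
ALTER TABLE TABLE1
  EXCHANGE PARTITION P_2015_06 WITH TABLE TABLE1_TEMPORARY
  INCLUDING INDEXES WITHOUT VALIDATION;

-- finally rebuild any global indexes on TABLE1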
How about keeping the UPDATE in the same table, but breaking it into multiple small chunks?
UPDATE .. WHERE transaction_date = somedatehere AND id BETWEEN 0000000 and 0999999
COMMIT
UPDATE .. WHERE transaction_date = somedatehere AND id BETWEEN 1000000 and 1999999
COMMIT
UPDATE .. WHERE transaction_date = somedatehere AND id BETWEEN 2000000 and 2999999
COMMIT
This could help if the total workload is potentially manageable, but doing it all in one chunk is the problem. This approach breaks it into modest-sized pieces.
Doing it this way could, for example, enable other apps to keep running & give other workloads a look in; and would avoid needing a single humungous transaction in the logfile.
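If the IDs are (or can be treated as) numeric ranges, as the chunked statements above assume, the same idea can be automated in a PL/SQL block along these lines (the chunk size, number of chunks and date are placeholders):
BEGIN
  FOR i IN 0 .. 999 LOOP
    UPDATE table1
       SET status = 3
     WHERE transaction_date = DATE '2015-06-01'
       AND id BETWEEN i * 1000000 AND i * 1000000 + 999999;
    COMMIT;  -- commit after each chunk, as in the statements above
  END LOOP;
END;
/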

SQL - renumbering a sequential column to be sequential again after deletion

I've researched and realize I have a unique situation.
First off, I am not allowed to post images yet to the board since I'm a new user, so see appropriate links below
I have multiple tables where a column (not always the identifier column) is sequentially numbered and shouldn't have any breaks in the numbering. My goal is to make sure this stays true.
Down and Dirty
We have an 'Event' table where we randomly select a percentage of the rows and insert the rows into table 'Results'. The "ID" column from the 'Results' is passed to a bunch of delete queries.
This more or less ensures that there are missing rows in several tables.
My problem:
Figuring out an SQL query that will renumber the column I specify. I prefer not to drop the column.
Example delete query:
delete ItemVoid
from ItemTicket
join ItemVoid
    on ItemTicket.item_ticket_id = ItemVoid.item_ticket_id
where ItemTicket.ID in (select ID from results)
Example Tables Before:
Example Tables After:
As you can see, 2 rows were deleted from both tables based on the ID column. So now I have to figure out how to renumber the item_ticket_id and the item_void_id columns so that the highest number decreases to fill the missing value, the next highest one decreases, and so on. Problem #2: if the item_ticket_id changes in order to be sequential in ItemTickets, then it has to update that change in ItemVoid's item_ticket_id.
I appreciate any advice you can give on this.
(answering an old question as it's the first search result when I was looking this up)
(MS T-SQL)
Resequencing an ID column (not an IDENTITY one) that has gaps can be performed using only a simple CTE with ROW_NUMBER() to generate a new sequence.
The UPDATE works via the CTE 'virtual table' without any extra problems, actually updating the underlying original table.
Don't worry about the ID fields clashing during the update; if you wonder what happens when IDs are set to values that already exist, it doesn't suffer that problem - the original sequence is changed to the new sequence in one go.
WITH NewSequence AS
(
    SELECT
        ID,
        ROW_NUMBER() OVER (ORDER BY ID) AS ID_New
    FROM YourTable
)
UPDATE NewSequence SET ID = ID_New;
Since you are looking for advice on this, my advice is you need to redesign this as I see a big flaw in your design.
Instead of deleting the records and then going through the hassle of renumbering the remaining records, use a bit flag that will mark the records as Inactive. Then when you are querying the records, just include a WHERE clause to only include the records that are active:
SELECT *
FROM yourTable
WHERE Inactive = 0
Then you never have to worry about re-numbering the records. This also gives you the ability to go back and see the records that would have been deleted and you do not lose the history.
If you really want to delete the records and renumber them, then you can perform this task the following way (a rough sketch follows the list):
Create a new table
Insert your original data into the new table using the new numbers
Drop your old table
Rename the new table with the corrected numbers
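For instance, for the ItemVoid table from the question, those steps might look like this (a sketch only; the ROW_NUMBER ordering and the ItemVoid_New name are assumptions, and any other columns would be carried along too):
-- create and populate the new table with renumbered IDs
SELECT ROW_NUMBER() OVER (ORDER BY item_void_id) AS item_void_id,
       item_ticket_id
INTO   ItemVoid_New
FROM   ItemVoid;

-- drop the old table and swap the new one in under the original name
DROP TABLE ItemVoid;
EXEC sp_rename 'ItemVoid_New', 'ItemVoid';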
As you can see there would be a lot of steps involved in re-numbering the records. You are creating much more work this way when you could just perform an UPDATE of the bit flag.
You would change your DELETE query to something similar to this:
UPDATE ItemVoid
SET InActive = 1
FROM ItemVoid
JOIN ItemTicket
on ItemVoid.item_ticket_id = ItemTicket.item_ticket_id
WHERE ItemTicket.ID IN (select ID from results)
The bit flag is much easier and that would be the method that I would recommend.
The function that you are looking for is a window function. In standard SQL (SQL Server, MySQL), the function is row_number(). You use it as follows:
select row_number() over (order by <col>)
from <table>
In order to use this in your case, you would delete the rows from the table, then use a with statement to recalculate the row numbers, and then assign them using an update. For transactional integrity, you might wrap the delete and update into a single transaction.
Oracle supports similar functionality, but the syntax is a bit different. Oracle calls these functions analytic functions and they support a richer set of operations on them.
I would strongly caution you against using cursors, since these have lousy performance. Of course, this will not work on an identity column, since such a column cannot be modified.
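For example, using the tables from the question, the delete and the renumbering could be wrapped like this (a sketch only; just item_void_id is renumbered here, and ordering by the existing value is an assumption):
BEGIN TRANSACTION;

DELETE ItemVoid
FROM ItemTicket
JOIN ItemVoid
    ON ItemTicket.item_ticket_id = ItemVoid.item_ticket_id
WHERE ItemTicket.ID IN (SELECT ID FROM results);

WITH Renumbered AS
(
    SELECT item_void_id,
           ROW_NUMBER() OVER (ORDER BY item_void_id) AS new_id
    FROM ItemVoid
)
UPDATE Renumbered SET item_void_id = new_id;

COMMIT TRANSACTION;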

What's the most efficient query?

I have a table named Projects that has the following relationships:
has many Contributions
has many Payments
In my result set, I need the following aggregate values:
Number of unique contributors (DonorID on the Contribution table)
Total contributed (SUM of Amount on Contribution table)
Total paid (SUM of PaymentAmount on Payment table)
Because there are so many aggregate functions and multiple joins, it gets messy to use standard aggregate functions with the GROUP BY clause. I also need the ability to sort and filter these fields. So I've come up with two options:
Using subqueries:
SELECT Project.ID AS PROJECT_ID,
    (SELECT SUM(PaymentAmount) FROM Payment WHERE ProjectID = Project.ID) AS TotalPaidBack,
    (SELECT COUNT(DISTINCT DonorID) FROM Contribution WHERE RecipientID = Project.ID) AS ContributorCount,
    (SELECT SUM(Amount) FROM Contribution WHERE RecipientID = Project.ID) AS TotalReceived
FROM Project;
Using a temporary table:
DROP TABLE IF EXISTS Project_Temp;
CREATE TEMPORARY TABLE Project_Temp (project_id INT NOT NULL, total_payments INT, total_donors INT, total_received INT, PRIMARY KEY(project_id)) ENGINE=MEMORY;
INSERT INTO Project_Temp (project_id,total_payments)
SELECT `Project`.ID, IFNULL(SUM(PaymentAmount),0) FROM `Project` LEFT JOIN `Payment` ON ProjectID = `Project`.ID GROUP BY 1;
INSERT INTO Project_Temp (project_id,total_donors,total_received)
SELECT `Project`.ID, IFNULL(COUNT(DISTINCT DonorID),0), IFNULL(SUM(Amount),0) FROM `Project` LEFT JOIN `Contribution` ON RecipientID = `Project`.ID GROUP BY 1
ON DUPLICATE KEY UPDATE total_donors = VALUES(total_donors), total_received = VALUES(total_received);
SELECT * FROM Project_Temp;
Tests for both are pretty comparable, in the 0.7 - 0.8 seconds range with 1,000 rows. But I'm really concerned about scalability, and I don't want to have to re-engineer everything as my tables grow. What's the best approach?
Knowing the timing for each 1K rows is good, but the real question is how they'll be used.
Are you planning to send all these back to a UI? Google doles out results 25 per page; maybe you should, too.
Are you planning to do calculations in the middle tier? Maybe you can do those calculations on the database and save yourself bringing all those bytes across the wire.
My point is that you may never need to work with 1,000 or one million rows if you think carefully about what you do with them.
You can use EXPLAIN PLAN to see what the difference between the two queries is.
I would go with the first approach. You are allowing the RDBMS to do its job, rather than trying to do its job for it.
By creating a temp table, you will always create the full table for each query. If you only want data for one project, you still end up creating the full table (unless you restrict each INSERT statement accordingly). Sure, you can code it, but it's already becoming a fair amount of code and complexity for a small performance gain.
With a SELECT, the db can fetch the appropriate amount of data, optimizing the whole query based on context. If other users have queried the same data, it may even be cached (the query, and possibly the data, depending upon your db). If performance is truly a concern, you might consider using Indexed/Materialized Views, or generating a table on an INSERT/UPDATE/DELETE trigger. Scaling out, you can use server clusters and partitioned views - something that I believe will be difficult if you are creating temporary tables.
EDIT: the above is written without any specific rdbms in mind, although the OP added that mysql is the target db.
There is a third option which is derived tables:
Select Project.ID As PROJECT_ID
    , Payments.Total As TotalPaidBack
    , Coalesce(ContributionStats.ContributorCount, 0) As ContributorCount
    , ContributionStats.Total As TotalReceived
From Project
Left Join (
    Select C1.RecipientId
        , Sum(C1.Amount) As Total
        , Count(Distinct C1.DonorId) As ContributorCount
    From Contribution As C1
    Group By C1.RecipientId
    ) As ContributionStats
    On ContributionStats.RecipientId = Project.ID
Left Join (
    Select P1.ProjectID
        , Sum(P1.PaymentAmount) As Total
    From Payment As P1
    Group By P1.ProjectID
    ) As Payments
    On Payments.ProjectID = Project.ID
I'm not sure if it will perform better, but you might give it a shot.
A few thoughts:
The derived table idea would be good on other platforms, but MySQL has the same issue with derived tables that it does with views: they aren't indexed. That means that MySQL will execute the full content of the derived table before applying the WHERE clause, which doesn't scale at all.
Option 1 is good for being compact, but syntax might get tricky when you want to start putting the derived expressions in the WHERE clause.
The suggestion of materialized views is a good one, but MySQL unfortunately doesn't support them. I like the idea of using triggers. You could translate that temporary table into a real table that persists, and then use INSERT/UPDATE/DELETE triggers on the Payments and Contribution tables to update the Project Stats table.
Finally, if you don't want to mess with triggers, and if you aren't too concerned with freshness, you can always have the separate stats table and update it offline, having a cron job that runs every few minutes that does the work that you specified in Query #2 above, except on the real table. Depending on the nuances of your application, this slight delay in updating the stats may or may not be acceptable to your users.
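As a sketch of the trigger idea (the project_stats table and trigger name are hypothetical; only contributions are handled here, and payments and the distinct-donor count would need similar but more careful treatment):
CREATE TABLE project_stats (
    project_id INT NOT NULL PRIMARY KEY,
    total_received DECIMAL(12,2) NOT NULL DEFAULT 0
);

DELIMITER //
CREATE TRIGGER contribution_after_insert
AFTER INSERT ON Contribution
FOR EACH ROW
BEGIN
    -- keep the running total in step with each new contribution
    INSERT INTO project_stats (project_id, total_received)
    VALUES (NEW.RecipientID, NEW.Amount)
    ON DUPLICATE KEY UPDATE total_received = total_received + NEW.Amount;
END//
DELIMITER ;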

SQL stored procedure temporary table memory problem

We have the following simple Stored Procedure that runs as an overnight SQL server agent job. Usually it runs in 20 minutes, but recently the MatchEvent and MatchResult tables have grown to over 9 million rows each. This has resulted in the store procedure taking over 2 hours to run, with all 8GB of memory on our SQL box being used up. This renders the database unavailable to the regular queries that are trying to access it.
I assume the problem is that the temp table is too large and is causing the memory and database unavailability issues.
How can I rewrite the stored procedure to make it more efficient and less memory intensive?
Note: I have edited the SQL to indicate that there is some condition affecting the initial SELECT statement. I had previously left this out for simplicity. Also, when the query runs, CPU usage is at 1-2%, but memory, as previously stated, is maxed out.
CREATE TABLE #tempMatchResult
(
matchId VARCHAR(50)
)
INSERT INTO #tempMatchResult
SELECT MatchId FROM MatchResult WHERE SOME_CONDITION
DELETE FROM MatchEvent WHERE
MatchId IN (SELECT MatchId FROM #tempMatchResult)
DELETE FROM MatchResult WHERE
MatchId In (SELECT MatchId FROM #tempMatchResult)
DROP TABLE #tempMatchResult
There's probably a lot of stuff going on here, and it's not all your query.
First, I agree with the other posters. Try to rewrite this without a temp table if at all possible.
But assuming that you need a temp table here, you have a BIG problem in that you have no PK defined on it. It's going to vastly increase the amount of time your queries take to run. Create your table like this instead:
CREATE TABLE #tempMatchResult (
matchId VARCHAR(50) NOT NULL PRIMARY KEY /* NOT NULL if at all possible */
);
INSERT INTO #tempMatchResult
SELECT DISTINCT MatchId FROM MatchResult;
Also, make sure that your TempDB is sized correctly. Your SQL server may very well be expanding the database file dynamically on you, causing your query to suck CPU and disk time. Also, make sure your transaction log is sized correctly, and that it is not auto-growing on you. Good luck.
Looking at the code above, why do you need a temp table?
DELETE FROM MatchEvent WHERE
MatchId IN (SELECT MatchId FROM MatchResult)
DELETE FROM MatchResult
-- OR Truncate can help here, if all the records are to be deleted anyways.
You probably want to process this piecewise in some way. (I assume the queries are a lot more complicated than you showed?) In that case, you'd want to try one of these:
Write your stored procedure to iterate over results. (Might still lock while processing.)
Repeatedly select the N first hits, eg LIMIT 100 and process those.
Divide work by scanning regions of the table separately, using something like WHERE M <= x AND x < N.
Run the "midnight job" more often. Seriously, running stuff like this every 5 mins instead can work wonders, especially if work increases non-linearly. (If not, you could still just get the work spread out over the hours of the day.)
In Postgres, I've had some success using conditional indices. They work magic by applying an index if certain conditions are met. This means that you can keep the many 'resolved' and the few unresolved rows in the same table, but still get that special index over just the unresolved ones. Ymmv.
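For example, in Postgres a partial index over only the unprocessed rows might look like this (the table and the resolved column are hypothetical, not from the question):
CREATE INDEX idx_unresolved_match_events
    ON match_event (match_id)
    WHERE resolved = false;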
It should be pointed out that this is where using databases gets interesting. You need to pay close attention to your indices and use EXPLAIN on your queries a lot.
(Oh, and remember, interesting is a good thing in your hobbies, but not at work.)
First, indexes are a MUST here see Dave M's answer.
Another approach that I will sometimes use when deleting very large data sets is creating a shadow table with all the data, recreating the indexes and then using sp_rename to switch it in. You have to be careful with transactions here, but depending on the amount of data being deleted this can be faster.
Note: if there is pressure on tempdb, consider using joins rather than copying all the data into the temp table.
So for example
CREATE TABLE #tempMatchResult (
    matchId VARCHAR(50) NOT NULL PRIMARY KEY /* NOT NULL if at all possible */
);

INSERT INTO #tempMatchResult
SELECT DISTINCT MatchId FROM MatchResult;

set transaction isolation level serializable
begin transaction

create table MatchEventT(columns... here)

insert into MatchEventT
select m.* from MatchEvent m
left join #tempMatchResult t on t.MatchId = m.MatchId
where t.MatchId is null

-- create all the indexes for MatchEvent

drop table MatchEvent
exec sp_rename 'MatchEventT', 'MatchEvent'

-- similar code for MatchResult

commit transaction

DROP TABLE #tempMatchResult
Avoid the temp table if possible
It's only using up memory.
You could try this:
DELETE MatchEvent
FROM MatchEvent e ,
MatchResult r
WHERE e.MatchId = r.MatchId
If you can't avoid a temp table
I'm going to stick my neck out here and say: you don't need an index on your temporary table because you want the temp table to be the smallest table in the equation and you want to table scan it (because all the rows are relevant). An index won't help you here.
Do small bits of work
Work on a few rows at a time.
This will probably slow down the execution, but it should free up resources.
- One row at a time
DECLARE @MatchId VARCHAR(50)

SELECT @MatchId = MIN(MatchId) FROM MatchResult

WHILE @MatchId IS NOT NULL
BEGIN
    DELETE MatchEvent
    WHERE MatchId = @MatchId

    SELECT @MatchId = MIN(MatchId) FROM MatchResult WHERE MatchId > @MatchId
END
- A few rows at a time
DECLARE @MinMatchId VARCHAR(50)

CREATE TABLE #tmp ( MatchId VARCHAR(50) )

/* get list of lowest 1000 MatchIds: */
INSERT #tmp
SELECT TOP (1000) MatchId
FROM MatchResult
ORDER BY MatchId

WHILE EXISTS (SELECT 1 FROM #tmp)
BEGIN
    DELETE MatchEvent
    FROM MatchEvent e
    JOIN #tmp t ON e.MatchId = t.MatchId

    /* remember the highest MatchId we've processed: */
    SELECT @MinMatchId = MAX(MatchId) FROM #tmp

    /* clear the batch and get the next 1000 MatchIds: */
    DELETE FROM #tmp

    INSERT #tmp
    SELECT TOP (1000) MatchId
    FROM MatchResult
    WHERE MatchId > @MinMatchId
    ORDER BY MatchId
END
This one deletes up to 1000 rows at a time.
The more rows you delete at a time, the more resources you will use but the faster it will tend to run (until you run out of resources!). You can experiment to find a more optimal value than 1000.
DELETE FROM MatchResult WHERE
MatchId In (SELECT MatchId FROM #tempMatchResult)
can be replaced with
DELETE FROM MatchResult WHERE SOME_CONDITION
Can you just turn cascading deletes on between MatchResult and MatchEvent? Then you need only worry about identifying one set of data to delete, and let SQL take care of the other.
The alternative would be to make use of the OUTPUT clause, but that's definitely more fiddle.
Both of these would let you delete from both tables, but only have to state (and execute) your filter predicate once. This may still not be as performant as a batching approach as suggested by other posters, but worth considering. YMMV
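A sketch of the cascading-delete option (this assumes MatchResult.MatchId is its primary key and that no conflicting foreign key already exists; the constraint name is arbitrary):
ALTER TABLE MatchEvent
    ADD CONSTRAINT FK_MatchEvent_MatchResult
    FOREIGN KEY (MatchId) REFERENCES MatchResult (MatchId)
    ON DELETE CASCADE;

-- now a single filtered delete removes the rows from both tables
DELETE FROM MatchResult WHERE SOME_CONDITION;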