In SQL Server 2008 I have this easy but badly written stored procedure that works:
ALTER PROCEDURE [dbo].[paActualizaCapacidadesDeZonas]
AS
BEGIN
SET NOCOUNT ON;
DECLARE @IdArticulo AS INT
DECLARE @ZonaAct AS INT
DECLARE @Suma AS INT
UPDATE CapacidadesZonas SET Ocupado=0
DECLARE csrSumas CURSOR FOR
SELECT AT.IdArticulo, T.NumZona, SUM(AT.Cantidad)
FROM ArticulosTickets AT
INNER JOIN Tickets T ON AT.IdTicket = T.IdTicket
GROUP BY AT.IdArticulo, T.NumZona
OPEN csrSumas
FETCH NEXT FROM csrSumas INTO @IdArticulo, @ZonaAct, @Suma
WHILE @@FETCH_STATUS = 0
BEGIN
UPDATE CapacidadesZonas SET Ocupado = @Suma
WHERE NumZona = @ZonaAct AND IdArticulo = @IdArticulo
FETCH NEXT FROM csrSumas INTO @IdArticulo, @ZonaAct, @Suma
END
CLOSE csrSumas
DEALLOCATE csrSumas
END
I know I should avoid cursors, so I'm pretty sure this can be done in a much cleaner way.
I've tried with a single Update query:
UPDATE CapacidadesZonas SET Ocupado =
(SELECT SUM(AT.Cantidad)
FROM ArticulosTickets AT
INNER JOIN Tickets T ON AT.IdTicket = T.IdTicket
GROUP BY AT.IdArticulo, T.NumZona)
But this is really wrong, because the select returns more than one row.
I feel bad about this because it's supposed to be easy for me, but I can't find the equivalent query.
Any suggestions?
Thanks in advance.
There are many different solutions to this problem-- see this article for a few options. Here's one way: use a derived table.
UPDATE CapacidadesZonas SET Ocupado=0 WHERE Ocupado <> 0;
UPDATE C
SET Ocupado = s.Ocupado
FROM CapacidadesZonas C INNER JOIN
(
SELECT T.NumZona, AT.IdArticulo, SUM(AT.Cantidad) as Ocupado
FROM ArticulosTickets AT
INNER JOIN Tickets T ON AT.IdTicket = T.IdTicket
GROUP BY AT.IdArticulo, T.NumZona
) s ON s.NumZona = C.NumZona AND s.IdArticulo = C.IdArticulo;
Caveats:
Are you expecting the CapacidadesZonas table to be available to a live application while the update is happening? If so, you may have a locking or performance issue, since SQL Server may lock the whole table for the duration of the update. If this is the case, consider doing your update in batches (e.g. 1,000 rows each); UPDATE TOP makes batching easy (see the sketch after these caveats).
Sometimes SQL Server picks a suboptimal plan for queries like this. It may be faster to load a temp table (like in astander's solution above, but using a temp table instead of a table variable) than to try to do the update as a single query. If you do this, remember to make sure there's an index on (IdArticulo, NumZona) on the temp table before you do your update.
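Here is a rough sketch of the batched variant mentioned in the first caveat. It assumes rows whose value is already correct can be skipped, so the loop eventually finds nothing left to change and stops:
DECLARE @rows int = 1;
WHILE @rows > 0
BEGIN
    -- recalculate at most 1,000 rows per iteration to keep locks short-lived
    UPDATE TOP (1000) C
    SET Ocupado = s.Ocupado
    FROM CapacidadesZonas C
    INNER JOIN
    (
        SELECT T.NumZona, AT.IdArticulo, SUM(AT.Cantidad) AS Ocupado
        FROM ArticulosTickets AT
        INNER JOIN Tickets T ON AT.IdTicket = T.IdTicket
        GROUP BY AT.IdArticulo, T.NumZona
    ) s ON s.NumZona = C.NumZona AND s.IdArticulo = C.IdArticulo
    WHERE C.Ocupado <> s.Ocupado;  -- only touch rows that still need the new value

    SET @rows = @@ROWCOUNT;
END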
Try:
UPDATE cz
SET Ocupado = ISNULL((SELECT SUM(AT.Cantidad)
                      FROM ArticulosTickets AT
                      INNER JOIN Tickets T ON AT.IdTicket = T.IdTicket
                      WHERE T.NumZona = cz.NumZona AND AT.IdArticulo = cz.IdArticulo), 0)
FROM CapacidadesZonas AS cz
Hello, I have to migrate data from one table to another and I want to avoid using a cursor.
Using a cursor this would be very easy, since I'd just have to do something like this:
DECLARE db_cursor CURSOR FOR
select Id, dataToMigrate
from OriginTable
where bar <> 'foo'
OPEN db_cursor
FETCH NEXT FROM db_cursor into @Id, @DataToMigrate
WHILE @@FETCH_STATUS = 0
BEGIN
UPDATE DestinationTable
SET Value = @DataToMigrate
Where Id = @Id
FETCH NEXT FROM db_cursor into @Id, @DataToMigrate
END
CLOSE db_cursor
DEALLOCATE db_cursor
However, this feels wrong. I'm sure there must be an easier and more clever way of doing this without a cursor.
Anyone knows a better way?
Yes a cursor is completely the wrong way to do this. Kudos for looking for a better way. You can do this with a simple update statement.
update d
set Value = o.DataToMigrate
from DestinationTable d
join OriginTable o on o.SomeColumn = d.SomeColumn
where o.bar <> 'foo'
A set-based solution is the best choice in your case. You can use an UPDATE with a JOIN for this. Be aware that this merely updates existing rows; it doesn't insert any data.
UPDATE D
SET D.Value = O.dataToMigrate
FROM DestinationTable D
INNER JOIN OriginTable O
ON D.Id = O.Id
WHERE O.bar <> 'foo'
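If some Ids exist in OriginTable but not yet in DestinationTable, a hedged sketch of covering the insert side as well (the column list is assumed from the question):
INSERT INTO DestinationTable (Id, Value)
SELECT O.Id, O.dataToMigrate
FROM OriginTable O
WHERE O.bar <> 'foo'
  AND NOT EXISTS (SELECT 1 FROM DestinationTable D WHERE D.Id = O.Id)  -- only the missing rows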
This can be done with an UPDATE statement, as far as I can tell.
This is from memory, but it'd look like this:
UPDATE dt
SET Value = ot.dataToMigrate
FROM DestinationTable dt
JOIN OriginTable ot
ON dt.Id = ot.Id
WHERE bar <> 'foo'
The syntax has to be exactly right, but it's very possible to join two tables and update one from the other.
I have written a stored procedure.
Now I see that it performs very poorly.
I think this is because of the WHILE loop.
ALTER PROCEDURE [dbo].[DeleteEmptyCatalogNodes]
@CatalogId UNIQUEIDENTIFIER,
@CatalogNodeType int = null
AS
BEGIN
SET NOCOUNT ON;
DECLARE @CID UNIQUEIDENTIFIER
DECLARE @CNT int
SET @CID = @CatalogId
SET @CNT = @CatalogNodeType
DELETE cn FROM CatalogNodes cn
LEFT JOIN CatalogNodes as cnj on cn.CatalogNodeId = cnj.ParentId
LEFT JOIN CatalogArticles as ca on cn.CatalogNodeId = ca.CatalogNodeId
WHERE cn.CatalogId = @CID
AND cnj.CatalogNodeId IS NULL
AND ca.ArticleId IS NULL
AND (cn.CatalogNodeType = @CNT OR @CNT IS NULL)
WHILE (@@ROWCOUNT > 0)
BEGIN
DELETE cn FROM CatalogNodes cn
LEFT JOIN CatalogNodes as cnj on cn.CatalogNodeId = cnj.ParentId
LEFT JOIN CatalogArticles as ca on cn.CatalogNodeId = ca.CatalogNodeId
WHERE cn.CatalogId = @CID
AND cnj.CatalogNodeId IS NULL
AND ca.ArticleId IS NULL
AND (cn.CatalogNodeType = @CNT OR @CNT IS NULL)
END
END
Can any of you give me a hint on how to do this in a more 'set'-based way?
Thanks a lot!
EDIT for comments and answers:
The tables are built like this:
CatalogNodes:
CatalogNodeId|ParentId|Name
1|NULL|Root
2|1|Node1
3|1|Node2
4|2|Node1.1
CatalogArticles:
CatalogNodeId|Name
3|Article1
3|Article2
3|Article3
After my SP was called, Node1 and Node1.1 have to be deleted.
In the first DELETE statement, Node1.1 will be deleted.
In the WHILE loop, Node1 will be deleted.
I hope my problem is now easier to understand; it is a tree structure.
You just do not need the WHILE part, as all matched rows will get deleted by the first DELETE statement.
Your loop doesn't do anything ... the first DELETE statement will delete any records that match your WHERE condition ... so @@ROWCOUNT will be greater than 0, but there won't be any records left to delete for the second DELETE statement inside the loop. Or did I miss something?
Anyway, I don't think executing the DELETE twice in a row has a big influence on performance ... you should see it if you look at the query plan.
One way to do this, in my opinion, is to create a table variable, put all the elements you have to delete into it, and use it with a JOIN to do the delete in one single statement.
CatalogNodes is what you want to delete. Create a SELECT that pulls out all the CatalogNodes you want to get rid of. If there are rows tied to them by foreign key constraints, delete those first, and finally, once they are all gone, delete the CatalogNodes. Temporary tables could be of benefit here as they are held in memory.
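For what it's worth, here is a hedged sketch of doing the whole thing in a single set-based DELETE with a recursive CTE. Table and column names are taken from the question; the @CatalogId and @CatalogNodeType filters from the stored procedure are omitted for brevity:
-- keep every node whose subtree contains at least one article; delete the rest
;WITH NodesToKeep AS (
    -- anchor: nodes that directly hold an article
    SELECT cn.CatalogNodeId, cn.ParentId
    FROM CatalogNodes cn
    WHERE EXISTS (SELECT 1 FROM CatalogArticles ca
                  WHERE ca.CatalogNodeId = cn.CatalogNodeId)
    UNION ALL
    -- recursive step: walk up to the parents, so ancestors of article-bearing nodes are kept too
    SELECT p.CatalogNodeId, p.ParentId
    FROM CatalogNodes p
    INNER JOIN NodesToKeep c ON c.ParentId = p.CatalogNodeId
)
DELETE cn
FROM CatalogNodes cn
WHERE NOT EXISTS (SELECT 1 FROM NodesToKeep k
                  WHERE k.CatalogNodeId = cn.CatalogNodeId);
With the sample data above this removes Node1 and Node1.1 in one pass, while Root and Node2 survive because Node2 holds articles.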
Using a trigger that fires when a DELETE query is executed, I want to delete all the rows targeted by the delete except the last returned row.
This trigger doesn't work so any help is greatly appreciated.
CREATE TRIGGER TR_StergereOfertaSpeciala
ON OferteSpeciale
INSTEAD OF DELETE
AS
DECLARE @nr INTEGER;
IF (EXISTS(SELECT * FROM DELETED))
BEGIN
SET @nr = (SELECT COUNT(*) FROM DELETED);
DELETE FROM (
SELECT TOP(@nr - 1)* FROM OferteSpeciale
INNER JOIN DELETED ON OferteSpeciale.codP = Deleted.codP
AND OferteSpeciale.codM = Deleted.codM
AND OferteSpeciale.dela = Deleted.dela)
END
Here is an example of getting your concept to work properly:
CREATE TRIGGER TR_StergereOfertaSpeciala
ON OferteSpeciale
INSTEAD OF DELETE
AS BEGIN
DECLARE @nr INT
SET @nr = (SELECT COUNT(*) FROM DELETED)
IF (@nr > 1) BEGIN
DELETE o
FROM OferteSpeciale AS o
INNER JOIN (SELECT TOP (@nr - 1) * FROM DELETED /* ORDER BY ??? */) AS d
ON o.codP = d.codP
AND o.codM = d.codM
AND o.dela = d.dela
END
END
Note the syntax for a DELETE with a JOIN. Also note that we're arbitrarily choosing the one row to keep. I would suggest, as @RBarryYoung has mentioned, specifically ordering the set by something so you know which row you are keeping.
Another way of doing this, which would avoid the somewhat dynamic TOP clause (clever, BTW), would be to specifically exclude the record you want to keep using NOT EXISTS/IN (see the sketch below).
Also, you probably want to avoid trigger recursion and nested triggers in this case.
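A rough sketch of that NOT EXISTS variant, under the same assumptions about the key columns; the trigger name here is just an example, and the "keeper" row is arbitrarily chosen as the one with the highest dela:
-- hypothetical trigger name; keeps the DELETED row with the largest dela and removes the rest
CREATE TRIGGER TR_StergereOfertaSpecialaNotExists
ON OferteSpeciale
INSTEAD OF DELETE
AS BEGIN
    DELETE o
    FROM OferteSpeciale AS o
    INNER JOIN DELETED AS d
        ON  o.codP = d.codP
        AND o.codM = d.codM
        AND o.dela = d.dela
    WHERE NOT EXISTS (SELECT 1
                      FROM (SELECT TOP (1) codP, codM, dela
                            FROM DELETED
                            ORDER BY dela DESC) AS keeper   -- the one row to keep
                      WHERE keeper.codP = o.codP
                        AND keeper.codM = o.codM
                        AND keeper.dela = o.dela)
END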
I am trying to update a row inside a cursor. What I am trying to do is update a chain of records with OLD_QTY and NEW_QTY. However, when I try to do my update it gives the error "The cursor is READ ONLY", even though I included for update of OLD_QTY, NEW_QTY in my declaration. It makes no difference if I include OLD_QTY and NEW_QTY in the select statement.
declare @current_inv_guid uniqueidentifier
declare @last_inv_guid uniqueidentifier
declare @current_vid int
declare @last_vid int
--declare @current_new_qty money
declare @last_new_qty money
--declare @current_old_qty money
declare iaCursor cursor
for select INV_GUID, old_VID
--, OLD_QTY, NEW_QTY
from #IA
order by INV_GUID, old_vid, ENTRY_NUM
for update --of OLD_QTY, NEW_QTY
open iaCursor
Fetch next from iaCursor into @current_inv_guid, @current_vid --, @current_old_qty, @current_new_qty
while @@fetch_status = 0
begin
--test to see if we hit a new chain.
if(@last_inv_guid <> @current_inv_guid or @current_vid <> @last_vid)
begin
set @last_new_QTY = (select #lots.QTY_RECEIVED from #lots where #lots.INV_GUID = @current_inv_guid and LOT_VID = @current_vid)
set @last_inv_guid = @current_inv_guid
set @last_vid = @current_vid
end
--update the current link in the chain
update #ia
set OLD_QTY = @last_new_QTY,
NEW_QTY = @last_new_QTY + QTY_CHANGE,
@last_new_QTY = @last_new_QTY + QTY_CHANGE
where current of iaCursor
--get the next link
fetch next from iaCursor into @current_inv_guid, @current_vid --, @current_old_qty, @current_new_qty
end
close iaCursor
deallocate iaCursor
Putting an ORDER BY in the select made the cursor read-only.
You are not explicitly saying what behaviour you want; therefore, default rules apply, according to which the cursor may or may not be updatable, depending on the underlying query.
It's perfectly fine to use an ORDER BY in an updatable cursor, but you have to be more verbose and tell SQL Server what you want in detail, for instance:
declare iaCursor cursor
local
forward_only
keyset
scroll_locks
for
select INV_GUID, old_VID
from #IA
order by INV_GUID, old_vid, ENTRY_NUM
for update of OLD_QTY, NEW_QTY
There's an important but subtle note on the documentation page that Patrick listed:
If the query references at least one table without a unique index, the
keyset cursor is converted to a static cursor.
And of course STATIC cursors are read-only.
Besides the reason you mentioned in your answer, what you're attempting to do runs counter to the way SQL is meant to be used. Try to update the data in sets, not row by row.
I'm not positive, as I don't know your table design, but I believe the following should work. You may get better performance out of this. In particular, I'm assuming that QTY_CHANGE is coming from #ia, although this may not be the case.
UPDATE #ia as a set (OLD_QTY, NEW_QTY) = (SELECT #lots.QTY_RECEIVED + (COUNT(b.*) * a.QTY_CHANGE),
#lots.QTY_RECEIVED + ((COUNT(b.*) + 1) * a.QTY_CHANGE)
FROM #lots
LEFT JOIN #ia as b
ON b.INV_GUID = a.INV_GUID
AND b.OLD_VID = a.OLD_VID
AND b.ENTRY_NUM < a.ENTRY_NUM
WHERE #lots.INV_GUID = a.INV_GUID
AND #lots.LOT_VID = a.OLD_VID)
WHERE EXISTS (SELECT '1'
FROM #lots
WHERE #lots.INV_GUID = a.INV_GUID
AND #lots.LOT_VID = a.OLD_VID)
EDIT:
... the previous version of the answer was written with a DB2 perspective, although it would otherwise be db-agnostic. It also had the problem of using the same value of QTY_CHANGE for every row, which is unlikely. This should be a more idiomatic SQL Server 2008 version, as well as being more likely to output the correct answer:
WITH RT AS (SELECT #IA.inv_guid, #IA.old_vid, #IA.entry_num,
COALESCE(MAX(#Lots.qty_received) OVER(PARTITION BY #IA.inv_guid, #IA.old_vid), 0) +
SUM(#IA.qty_change) OVER(PARTITION BY #IA.inv_guid, #IA.old_vid
ORDER BY #IA.entry_num)
AS running_total
FROM #IA
LEFT JOIN #Lots
ON #Lots.inv_guid = #IA.inv_guid
AND #Lots.lot_vid = #IA.old_vid)
UPDATE #IA
SET #IA.old_qty = RT.running_total - #IA.qty_change, #IA.new_qty = RT.running_total
FROM #IA
JOIN RT
ON RT.inv_guid = #IA.inv_guid
AND RT.old_vid = #IA.old_vid
AND RT.entry_num = #IA.entry_num
Some cursor declarations do not allow updates. The documentation gives a hint in the following remark:
If the SELECT statement does not support updates (insufficient permissions, accessing remote tables that do not support updates, and
so on), the cursor is READ_ONLY.
I ran into the same issue when trying to join the "inserted" object of a trigger in the select statement of the cursor declaration.
Use the DYNAMIC option, which the documentation describes as follows:
Defines a cursor that reflects all data changes made to the rows in its result set as you scroll around the cursor. The data values, order, and membership of the rows can change on each fetch.
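A minimal sketch of that declaration, reusing the names from the question. Note that, depending on available indexes and the ORDER BY, SQL Server may still implicitly convert the cursor to another type:
-- sketch only: the same cursor declared DYNAMIC so it remains updatable
declare iaCursor cursor
    local
    dynamic
for select INV_GUID, old_VID
    from #IA
    order by INV_GUID, old_vid, ENTRY_NUM
for update of OLD_QTY, NEW_QTY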
I have a stored procedure that creates and opens some cursors. It closes them at the end, but if it hits an error those cursors are left open! Then subsequent runs fail when it tries to create the cursors, since a cursor with that name already exists.
Is there a way I can query which cursors exists and if they are open or not so I can close and deallocate them? I feel like this is better than blindly trying to close and swallow errors.
This seems to work for me:
CREATE PROCEDURE dbo.p_cleanUpCursor @cursorName varchar(255) AS
BEGIN
DECLARE @cursorStatus int
SET @cursorStatus = (SELECT cursor_status('global',@cursorName))
DECLARE @sql varchar(255)
SET @sql = ''
IF @cursorStatus > 0
SET @sql = 'CLOSE '+@cursorName
IF @cursorStatus > -3
SET @sql = @sql+' DEALLOCATE '+@cursorName
IF @sql <> ''
exec(@sql)
END
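Usage would then look something like this (the cursor name is just an example):
EXEC dbo.p_cleanUpCursor 'csrSumas'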
Look here for info on how to find cursors. I have never used any of them because I could figure out a way to get it done without going Row By Agonizing Row.
You should rebuild the sp to either:
1. not use cursors (we can help - there is almost always a way to avoid RBAR), or
2. build it in a transaction and roll it back if there is a failure or if you detect an error. Here are some excellent articles on this: part 1 and part 2.
If you have SQL 2005 or later, you can also use TRY/CATCH, as sketched below.
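A minimal sketch of that pattern (the cursor and table names here are made up), making sure the cursor is always closed and deallocated even if the body fails:
BEGIN TRY
    -- LOCAL keeps the cursor out of the global namespace in the first place
    DECLARE csrDemo CURSOR LOCAL FOR
        SELECT Id FROM dbo.SomeTable   -- hypothetical table
    OPEN csrDemo
    -- ... FETCH loop / real work goes here ...
    CLOSE csrDemo
    DEALLOCATE csrDemo
END TRY
BEGIN CATCH
    -- CURSOR_STATUS >= 0 means the cursor is still open; >= -1 means it is still allocated
    IF CURSOR_STATUS('local', 'csrDemo') >= 0
        CLOSE csrDemo
    IF CURSOR_STATUS('local', 'csrDemo') >= -1
        DEALLOCATE csrDemo
    DECLARE @msg nvarchar(2048)
    SET @msg = ERROR_MESSAGE()
    RAISERROR(@msg, 16, 1)   -- re-raise so the caller still sees the original error
END CATCH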
EDIT (in response to your post): Ideally, data generation is best handled at the application level, as applications are better suited for non-set-based operations.
Red Gate has a SQL Data Generator that I have used before (it's great for single tables, but takes some configuring if you have lots of FKs or a wide [normalized] database).
This works on 2008 R2; I haven't tested it on anything earlier than that:
USE MASTER
GO
select s.session_id, s.host_name, s.program_name, s.client_interface_name, s.login_name
, c.cursor_id, c.properties, c.creation_time, c.is_open, con.text,
l.resource_type, d.name, l.request_type, l.request_Status, l.request_reference_count, l.request_lifetime, l.request_owner_type
from sys.dm_exec_cursors(0) c
left outer join (select * from sys.dm_exec_connections c cross apply sys.dm_exec_sql_text(c.most_recent_sql_handle) mr) con on c.session_id = con.session_id
left outer join sys.dm_exec_sessions s on s.session_id = c.session_id
left outer join sys.dm_tran_locks l on l.request_session_id = c.session_id
left outer join sys.databases d on d.database_id = l.resource_database_id
You can use the sp_cursor_list system stored procedure to get a list
of cursors visible to the current connection, and
sp_describe_cursor, sp_describe_cursor_columns, and
sp_describe_cursor_tables to determine the characteristics of a
cursor.
(from http://msdn.microsoft.com/it-it/library/aa172595(v=sql.80).aspx )
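For example, a quick sketch of calling sp_cursor_list (scope 3 asks for both local and global cursors); the result is itself returned through a cursor output parameter:
DECLARE @cursor_report CURSOR
EXEC sp_cursor_list @cursor_return = @cursor_report OUTPUT,
                    @cursor_scope = 3   -- 3 = both local and global cursors
FETCH NEXT FROM @cursor_report          -- each row describes one cursor visible to this connection
CLOSE @cursor_report
DEALLOCATE @cursor_report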
You can use
sys.dm_exec_cursors
as described here.
Basically, you can run this sample query to get information about the cursors that are open in the various databases:
SELECT * FROM sys.dm_exec_cursors(0)