I am trying to update a row on SQL Server 2005. When I run the SQL, I receive a message indicating that the execution was successful and 1 row was affected. However, when I do a SELECT against the row I supposedly updated, the value remains unchanged. What's going on with this SQL Server when a successful query does absolutely nothing?
The query is:
UPDATE [database1].[dbo].[table1]
SET [order] = 215
WHERE [email] = 'email@email.com'
Check for a trigger on [database1].[dbo].[table1]; possibly it is doing something you are not aware of.
EDIT
Without seeing the trigger code, you probably just need to add support for [order] to the trigger, since it is a new column (based on your comment).
Thanks KM, I checked the triggers and you were right. There was a trigger that I had to disable to get the SQL to work.
Related
I am executing the following UPDATE SQL query against an Oracle database server:
UPDATE TEST.SS_USER_CREDENTIALS SET CREDENTIAL = 'UUHs4w4Nk45gHrSIHA==';
After executing this query in Oracle SQL Developer, the execution-status spinner keeps spinning forever and no output is returned. However, the following SELECT query on the same table returns results immediately:
SELECT * FROM TEST.SS_USER_CREDENTIALS;
Could you please help me understand why the UPDATE query is not executing?
If you don't have autocommit enabled, you may need to run
COMMIT;
Otherwise, Oracle updates aren't actually applied to your data set.
Try it with a WHERE clause:
UPDATE TEST.SS_USER_CREDENTIALS SET CREDENTIAL = 'UUHs4w4Nk45gHrSIHA==' where id='someid';
As written, your command updates every row in the table, so if the table has a huge number of rows it will take time (and I guess this column should not be a foreign key). The general syntax is:
UPDATE table_name
SET column1 = value1, column2 = value2, ...
WHERE condition;
If you have a foreign key problem and are looking for a temporary solution, you can also change the ON UPDATE action to CASCADE and then modify your IDs.
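A sketch of that change, with placeholder table and constraint names; note this assumes a database that supports ON UPDATE CASCADE on foreign keys (SQL Server and MySQL do; Oracle does not):
ALTER TABLE dbo.ChildTable DROP CONSTRAINT FK_ChildTable_ParentTable;

ALTER TABLE dbo.ChildTable
ADD CONSTRAINT FK_ChildTable_ParentTable
    FOREIGN KEY (parent_id) REFERENCES dbo.ParentTable (id)
    ON UPDATE CASCADE;
-- Updating ParentTable.id now propagates to ChildTable.parent_id automatically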
It can be that you have another open, uncommitted transaction on the same set of records, so they are locked by that transaction. This happens, for example, if you locked them yourself by running the same UPDATE in another session. Just commit or roll back your transactions.
This works for me:
Close the database connection and refresh it
Refresh the table
Restart the SQL Server service in Services
I know this is not a complete answer, but I hope it helps!
I have a SQL statement that first updates, then selects:
UPDATE myTable
SET field1=@someValue
WHERE field2=@someValue2
SELECT 1 returnValue
The process that consumes the results of this statement expects a single result set; simple enough.
The problem arises because an update trigger was added to the table that produces a result set, i.e. it selects like so:
SELECT t_field1, t_field2, t_field3 FROM t_table
The obvious solution is to split up the statements. Unfortunately, the real-world implementation of this is complex and to be avoided if possible. The trigger is also necessary and cannot be disabled.
Is there a way to suppress the results from the update, returning only the value from the SELECT statement?
The ability to return result sets from triggers is deprecated in SQL Server 2012 and will be removed in a future version (maybe even in SQL Server 2016, but probably in the next version). Change your trigger to return the data in some other way. If it is needed just for debugging, use PRINT instead of SELECT. If it is needed for some other reasons, insert the data into a temporary table and perform the SELECT from the calling procedure (only when needed).
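For instance, one variant of that fix is a trigger along these lines, which writes the affected rows to a permanent audit table instead of returning a result set (the audit table name is hypothetical; t_field1 through t_field3 come from the question):
CREATE TRIGGER trg_t_table_audit ON dbo.t_table
AFTER UPDATE
AS
BEGIN
    SET NOCOUNT ON;
    -- Persist the affected rows rather than SELECTing them back to the caller
    INSERT INTO dbo.t_table_audit (t_field1, t_field2, t_field3, changed_at)
    SELECT t_field1, t_field2, t_field3, GETDATE()
    FROM inserted;
END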
I'm using SQL Server 2012, and I'm debugging a stored procedure that does some INSERT INTO #temporal table SELECT statements.
Is there any way to view the data selected in the command (the subquery of the INSERT INTO)?
Is there any way to view the data inserted and/or the temporal table where the insert made the changes?
It doesn't matter if it is all the rows at once, not one by one.
UPDATE:
Requirements from AT Compliance and Company Policy require that any modification be done as part of the test process, and it's probable this will be managed by another team. Is there any way to avoid any change to the script?
The main idea is that the AT user checks the outputs on their work desktop and copies and pastes them, without making any change to the environment or the product.
Thanks and kind regards.
If I understand your question correctly, then take a look at the OUTPUT clause:
Returns information from, or expressions based on, each row affected by an INSERT, UPDATE, DELETE, or MERGE statement. These results can be returned to the processing application for use in such things as confirmation messages, archiving, and other such application requirements.
For instance:
INSERT INTO #temporaltable
OUTPUT inserted.*
SELECT *
FROM ...
This will give you all the rows from the INSERT statement that were inserted into the temporal table, having been selected from the other table.
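A self-contained sketch of the same idea (the table and column names here are made up for illustration):
CREATE TABLE #temporal (id int, name varchar(50));

INSERT INTO #temporal (id, name)
OUTPUT inserted.id, inserted.name   -- echoes each inserted row back to the client
SELECT id, name
FROM dbo.SourceTable                -- hypothetical source table
WHERE id > 100;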
Is there any reason you can't just do this: SELECT * FROM #temporal? (And debug it in SQL Server Management Studio, passing in the same parameters your application is passing in).
It's a quick and dirty way of doing it, but one reason you might want to do it this way over the other (cleaner/better) answer is that you get a bit more control here. And if you're in a situation where you have multiple inserts into your temp table (hopefully you aren't), you can just do a single select to see all of the inserted rows at once.
I would still probably do it the other way, though (now that I know about it).
I know of no way to do this without changing the script. However, for the future, you should never write a complex stored proc or script without a debug parameter that lets you put in the data tests you will want. Make it the last parameter with a default value of 0 and you won't even have to change the current code that calls the proc.
Then you can add statements like the one below everywhere you want to check intermediate results. Further, in debug mode you might always roll back any transactions so that a bug will not affect the data.
IF @debug = 1
BEGIN
SELECT * FROM #temp
END
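A minimal skeleton of that pattern (the procedure, table, and column names are placeholders):
CREATE PROCEDURE dbo.usp_LoadData
    @someParam int,
    @debug bit = 0   -- last parameter, defaults off, so existing callers need no change
AS
BEGIN
    SET NOCOUNT ON;
    CREATE TABLE #temp (col1 int);

    INSERT INTO #temp (col1)
    SELECT col1 FROM dbo.SomeTable WHERE id = @someParam;

    IF @debug = 1
    BEGIN
        SELECT * FROM #temp;   -- inspect intermediate results on demand
    END
END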
Is there any way to see the exact results of an executed UPDATE statement in SQL Developer instead of only the number of rows updated? Of course, before committing the statement. I'd like to see what changes were made to the rows affected by the statement and which rows were affected, but I couldn't find a way to do it.
I don't think there's a way to get exactly what you want, i.e., to see the exact results of an UPDATE statement.
It's almost always a good idea to run a SELECT query with the same conditions as the WHERE clause of your UPDATE or DELETE statement, to see the records that would be affected, before running any non-trivial UPDATE or DELETE statements.
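For example (the table, column, and values here are hypothetical):
-- Preview the rows the UPDATE would touch by reusing its WHERE clause
SELECT * FROM my_table WHERE status = 'PENDING';

-- Only after reviewing that output, run the modification itself
UPDATE my_table SET status = 'DONE' WHERE status = 'PENDING';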
You could also use the SQL History tab (View (in the menu bar) -> SQL History, or press F8) to see all the SQL that has been executed in the past. This works in Oracle SQL Developer version 3.1.xx.
I have this code in a trigger.
if isnull(@d_email,'') <> isnull(@i_email,'')
begin
update server2.database2.dbo.Table2
set
email = @i_email
where user_id = (select user_id from server2.database2.dbo.Table1 where login = @login)
end
I would like to update a table on another DB server; both are MSSQL. The query above works for me, but it takes over 10 seconds to complete. Table2 has over 200k records. When I run the execution plan, it says that the remote scan has a 99% cost.
Any help would be appreciated.
First, the obvious: check the indexes on the tables on the linked server. If I saw this problem without the linked-server issue, that would be the first thing I would check.
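For instance, indexes along these lines on the remote tables would let the lookup and the join seek rather than scan (the index names are assumptions based on the query above):
-- On server2, in database2
CREATE NONCLUSTERED INDEX IX_Table1_login ON dbo.Table1 (login) INCLUDE (user_id);
CREATE NONCLUSTERED INDEX IX_Table2_user_id ON dbo.Table2 (user_id);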
Suggestion:
Instead of embedding the UPDATE in the server 1 trigger, create a stored procedure on the linked server and update the records by calling the stored procedure.
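A sketch of that approach, with an assumed procedure name and parameters; the join then runs entirely on the remote server instead of pulling rows across the link:
-- On server2, in database2:
CREATE PROCEDURE dbo.usp_UpdateUserEmail
    @login varchar(100),
    @email varchar(255)
AS
BEGIN
    SET NOCOUNT ON;
    UPDATE t2
    SET t2.email = @email
    FROM dbo.Table2 t2
    INNER JOIN dbo.Table1 t1 ON t1.user_id = t2.user_id
    WHERE t1.login = @login;
END
The trigger on server1 would then call EXEC server2.database2.dbo.usp_UpdateUserEmail @login, @i_email; (this assumes RPC Out is enabled on the linked server).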
Try to remove the sub-query from the UPDATE:
if isnull(@d_email,'') <> isnull(@i_email,'')
begin
update t2
set email = @i_email
from server2.database2.dbo.Table2 t2
inner join
server2.database2.dbo.Table1 t1
on (t1.user_id = t2.user_id)
where t1.login = @login
end
Whoa, bad trigger! Never, and I mean never, write a trigger assuming only one record will be inserted, updated, or deleted. You SHOULD NOT use variables this way in a trigger. Triggers operate on batches of data; if you assume one record, you will create integrity problems with your database.
What you need to do is join to the inserted pseudo-table rather than using a variable for the value, as sketched below.
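A minimal sketch of the set-based version, assuming the trigger's own table (dbo.Users here is a made-up name) has login and email columns, and that login does not change in the update; all other names are inferred from the snippets above:
CREATE TRIGGER trg_Users_SyncEmail ON dbo.Users
AFTER UPDATE
AS
BEGIN
    SET NOCOUNT ON;
    -- Join to inserted/deleted so every row in the batch is handled
    UPDATE t2
    SET t2.email = i.email
    FROM server2.database2.dbo.Table2 t2
    INNER JOIN server2.database2.dbo.Table1 t1 ON t1.user_id = t2.user_id
    INNER JOIN inserted i ON i.login = t1.login
    INNER JOIN deleted d ON d.login = i.login
    WHERE isnull(d.email, '') <> isnull(i.email, '');
END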
Also, updating a remote server from a trigger may not be such a dandy idea. If the remote server goes down, then you can't insert anything into the original table. If the data can be somewhat less than real time, the normal technique is to have the trigger write to a table on the same server and then have a job pick up the new info every 5-10 minutes; a sketch of that pattern follows. That way, if the remote server is down, the records can still be inserted, and they are stored until the job can pick them up and send them to the remote server.
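A sketch of that queue-table pattern (the queue table and its columns are hypothetical):
-- Local queue table populated by the trigger
CREATE TABLE dbo.EmailChangeQueue (
    queue_id int IDENTITY(1,1) PRIMARY KEY,
    login varchar(100) NOT NULL,
    new_email varchar(255) NOT NULL,
    queued_at datetime NOT NULL DEFAULT GETDATE()
);

-- Inside the trigger: a fast, local, set-based insert
INSERT INTO dbo.EmailChangeQueue (login, new_email)
SELECT i.login, i.email
FROM inserted i
INNER JOIN deleted d ON d.login = i.login
WHERE isnull(d.email, '') <> isnull(i.email, '');

-- A scheduled job then applies queued rows to server2 every 5-10 minutes
-- and deletes them once they have been sent.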