I was irresponsibly typing SQL against a production environment using the pgAdmin3 tool, and I mistyped and executed an UPDATE statement on a big, important table without a WHERE clause. In desperation I hit the cancel query button in pgAdmin3. Afterwards I looked at a small sample of the rows in the table and it seems fine, but I'm not sure about the database's integrity. What is the state of my boss's database?
The pgAdmin3 log:
-- Executing query:
UPDATE schema.big_important_table SET important_field = NULL;
********** Error **********
ERROR: canceling statement due to user request
SQL state: 57014
If you cancel a DML statement (even with auto-commit enabled, as pgAdmin does), the whole statement is rolled back.
So everything should be OK, nothing was changed.
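If you want to reassure yourself (and your boss), a quick sanity check is to count how many rows now have the column set to NULL, using the table and column names from the log above, and compare that against what you would expect:
SELECT count(*) AS nulled_rows
FROM schema.big_important_table
WHERE important_field IS NULL;
If that count matches what the table looked like before the accident, nothing from the cancelled UPDATE stuck.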
Database: Vertica
When I use an INSERT INTO ... SELECT ... statement to add data, the following error occurs:
NOTICE: error encountered in constraint validation
ERROR: DDL statement interfered with query plan
HINT: Please re-issue query
This error, although it is not the actual underlying failure, is pretty self-explanatory: you tried to issue an INSERT statement, but another DDL statement interfered (was holding a lock on the table) at the same time, so your INSERT statement was killed.
At this point you probably won't be able to see it any more, but in the future when you see this error, take note of the time that the statement was executed, then query the v_monitor.query_requests system table to see what other DDL statements were being executed at the same time.
If this was just an ad hoc INSERT statement, then the solution is to do exactly what the Vertica hint says: re-issue the query. If it is part of a script or application logic, then you need to handle this error accordingly and add logic to re-issue the statement when this error is thrown.
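A rough sketch of that check, assuming the query_requests columns start_timestamp, request_type and request (the time window is a placeholder; adjust it to when your INSERT failed):
SELECT start_timestamp, request_type, request
FROM v_monitor.query_requests
WHERE request_type = 'DDL'
  AND start_timestamp BETWEEN '2016-01-01 10:00:00' AND '2016-01-01 10:05:00'
ORDER BY start_timestamp;
Any DDL touching the target table in that window is the likely culprit.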
I am having a trigger issue resulting in the following error being returned to the updating application (Datalinx WHM is updating a Sage1000 table):
'A trigger returned a resultset and/or was running with SET NOCOUNT OFF while another outstanding result set was active'
The offending trigger is one of 7 on the table, and all of them currently have SET NOCOUNT ON; set. Disabling and re-enabling the triggers one by one has not turned up anything that conclusively identifies which trigger is causing the issue, which makes me wonder whether it is the sheer number of triggers and the time it takes for them to fire that is causing the problem (I don't know enough to know whether that is feasible).
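For reference, this is roughly how I have been cycling the triggers and reviewing their definitions (the table and trigger names here are placeholders):
-- List the triggers on the table, whether they are disabled, and their definitions
SELECT t.name, t.is_disabled, m.definition
FROM sys.triggers t
JOIN sys.sql_modules m ON m.object_id = t.object_id
WHERE t.parent_id = OBJECT_ID('dbo.StockMovement');
-- Disable and then re-enable one trigger at a time
DISABLE TRIGGER trg_example ON dbo.StockMovement;
ENABLE TRIGGER trg_example ON dbo.StockMovement;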
Having got nowhere with the above so far, I have turned to SQL Trace to try and narrow it down, which leads me to the following question:
I have enabled events for stored procedures (SP:StmtStarting, SP:StmtCompleted) and errors, amongst others (please see below for the full list of event settings), so I can see the table update and the subsequent error being returned (I can't attach the trace or an image as a newbie), but it doesn't show the names of the triggers firing, so I can't tell which one is at fault. Are there other events I can select which may help?
Thanks in advance
Trace events selected:
Cursors: CursorClose, CursorExecute, CursorOpen, CursorPrepare
Errors and warnings: Attention, ErrorLog, EventLog, User Error Message
Stored procedures: RPC:Completed, SP:Completed, SP:Starting, SP:StmtCompleted, SP:StmtStarting
TSQL: Exec Prepared SQL, Prepare SQL, SQL:StmtCompleted, SQL:StmtStarting
Do you know how to obtain the text of the SQL statement inside a trigger?
Thanks
Sorry to be the bearer of sad tidings, but I don't believe you can do that. No triggering event I'm aware of has visibility on the SQL statement text. The triggering events supported by Oracle (11g) are:
An INSERT, UPDATE, or DELETE statement on a specific table (or view, in some cases)
A CREATE, ALTER, or DROP statement on any schema object
A database startup or instance shutdown
A specific error message or any error message
A user logon or logoff
None of these, as far as I'm aware, has access to the text of the SQL statement. The Oracle documentation has the details.
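To illustrate the point, a DML trigger only sees the affected rows' values through :OLD and :NEW; a minimal sketch (the tables here are made-up names):
CREATE OR REPLACE TRIGGER contact_name_audit
AFTER UPDATE OF name ON contact
FOR EACH ROW
BEGIN
  -- :OLD and :NEW expose column values only; nothing here exposes
  -- the text of the UPDATE statement that fired the trigger
  INSERT INTO contact_audit (contact_id, old_name, new_name, changed_at)
  VALUES (:OLD.id, :OLD.name, :NEW.name, SYSDATE);
END;
/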
I want that when I execute a query such as DELETE FROM Contact and an error is raised during the transaction, it should still delete the rows that can be deleted, raising all the relevant errors for the rows that cannot be deleted.
For SQL Server, you are not going to break the atomicity of the DELETE command within a single statement - even issued outside of an explicit transaction, you are acting within an implicit one, i.e. all or nothing, as you have seen.
Within an explicit transaction (of multiple statements), whether an error rolls back the entire transaction or just the single statement that errored is controlled by the SET XACT_ABORT setting: ON rolls back the whole transaction, OFF (the default) rolls back only the failing statement.
Since your delete is a single statement, XACT_ABORT cannot help you - the statement will error and the whole delete will be rolled back.
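To make the distinction concrete, a sketch of the multi-statement case (the tables and the FK constraint are invented for illustration):
SET XACT_ABORT OFF;  -- default: a constraint violation aborts only the failing statement
BEGIN TRANSACTION;
DELETE FROM Contact WHERE Id = 1;  -- succeeds
DELETE FROM Contact WHERE Id = 2;  -- fails on an FK violation; only this statement is rolled back
COMMIT TRANSACTION;  -- the first delete is still committed
With SET XACT_ABORT ON, the same FK error would roll back the whole transaction, so neither row would be deleted.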
If you know the error condition you are going to face (such as an FK constraint violation), then you could ensure your delete has a suitable WHERE clause so it does not attempt to delete rows that you know will generate an error.
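For example, if child rows in an Orders table are what block the delete (the table and column names are hypothetical):
DELETE FROM Contact
WHERE NOT EXISTS (SELECT 1 FROM Orders o WHERE o.ContactId = Contact.Id);
This deletes every contact it can and simply skips the ones that would have violated the constraint, instead of erroring on them.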
If you're using MySQL you can take advantage of the DELETE IGNORE syntax.
This is a feature which will depend entirely on which flavour of database you are using. Some will have it and some won't.
For instance, Oracle offers us the ability to log DML errors in bulk. The example in the documentation uses an INSERT statement but the same principle applies to any DML statement.
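A rough sketch of that approach in Oracle, assuming a table called CONTACT (the error table defaults to ERR$_CONTACT):
-- One-off setup: create the error-logging table for CONTACT
EXEC DBMS_ERRLOG.CREATE_ERROR_LOG('CONTACT');
-- Rows that fail (e.g. on an FK violation) are logged instead of aborting the whole statement
DELETE FROM contact
LOG ERRORS INTO err$_contact ('bulk delete') REJECT LIMIT UNLIMITED;
The failing rows end up in ERR$_CONTACT along with the Oracle error code and message, while the rest of the delete goes through.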
When running a stored procedure (from a .NET application) that does an INSERT and an UPDATE, I sometimes (but not that often, really) get this error, seemingly at random:
ERROR [40001] [DataDirect][ODBC Sybase Wire Protocol driver][SQL Server]Your server command (family id #0, process id #46) encountered a deadlock situation. Please re-run your command.
How can I fix this?
Thanks.
Your best bet for solving your deadlocking issue is to set "print deadlock information" to on using
sp_configure "print deadlock information", 1
Every time there is a deadlock, this will print information about which processes were involved and what SQL they were running at the time of the deadlock.
If your tables are using allpages locking, switching to datarows or datapages locking can reduce deadlocks. If you do this, make sure to gather new statistics on the tables and to recreate the indexes, views, stored procedures and triggers that access the changed tables. If you don't, you will either get errors or not see the full benefit of the change, depending on which objects are not recreated.
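A rough sketch of that change in ASE (the table name is a placeholder; the switch rebuilds the table, so plan for that on a large table):
-- Switch the locking scheme (this rebuilds the table and its indexes)
ALTER TABLE big_table LOCK DATAROWS
go
-- Refresh optimizer statistics on the rebuilt table
UPDATE STATISTICS big_table
go
-- Mark stored procedures and triggers that reference the table for recompilation
EXEC sp_recompile big_table
go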
I have a set of long-running apps whose table access occasionally overlaps, and Sybase will throw this error. If you check the Sybase server log it will give you the complete information on why it happened: the SQL that was involved and the two processes trying to get a lock, usually one trying to read while the other is doing something like a delete. In my case the apps are running in separate JVMs, so I can't synchronize them and just have to clean up periodically.
Assuming that your tables are properly indexed (and that you are actually using those indexes - always worth checking via the query plan) you could try breaking the component parts of the SP down and wrapping them in separate transactions so that each unit of work is completed before the next one starts.
begin transaction
update mytable1
set mycolumn = "test"
where ID=1
commit transaction
go
begin transaction
insert into mytable2 (mycolumn) select mycolumn from mytable1 where ID = 1
commit transaction
go