Does MySQL undo changes if you kill a thread? - sql

I ran a DELETE query on some rows. It was taking too long, so I killed the thread from administration. Did it undo the changes to the table?

InnoDB will roll back the transaction.
MyISAM will leave the changes in place and possibly even leave the table in a corrupt state.
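If you are not sure which engine the table uses, a quick check like the following should tell you (the schema and table names are placeholders), and for MyISAM you can verify and repair the table afterwards:

SELECT TABLE_NAME, ENGINE
FROM information_schema.TABLES
WHERE TABLE_SCHEMA = 'your_db' AND TABLE_NAME = 'your_table';

-- Only relevant for MyISAM after an interrupted DELETE:
CHECK TABLE your_table;
REPAIR TABLE your_table;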

Related

Deadlock in SQL Server 2008 | INSERT (from application, EF) & SELECT (from stored procedure) statements working simultaneously

The program that inserts into my 2 tables is written with Entity Framework, and the data is SELECTed through a stored procedure at the SQL Server level. At some point the SELECT and the INSERT run at the same time, and when that happens I get the error below:
Transaction (Process ID) was deadlocked on resources with another process and has been chosen as the deadlock victim. Rerun the transaction.
How can I get rid of this deadlock problem? I need the best way to solve it.
Option 1: Implement NOLOCK? What would be the pros and cons of it here?
Option 2: Is there any way to extend the deadlock wait time so that it waits for the resource longer than it usually does? If yes, then how?
Option 3: Any other suggestions?
Thanks,
Rahuul Dutta
A deadlock cannot be cured by increasing the lock timeout. The resources are locked in such a way that the situation cannot resolve itself, regardless of how much time you give it. A special background process in SQL Server, the deadlock monitor, runs periodically (rather often, actually), and if it identifies a deadlock it kills the 'lighter' transaction immediately.
Deadlocks are usually dealt with in one of several ways: by providing an alternative data access path for the SELECT query (i.e. adding a nonclustered index), by minimizing the transaction duration (better indexing, again), or by using one of the snapshot isolation levels.
The least effort solution here will be setting the read committed snapshot isolation level. This way the SELECT query will not issue any shared locks on data, but still read only the committed data, which is a huge plus over using the NOLOCK hint (or read uncommitted isolation level).
You can change your transaction isolation level. The best option for deadlocks would be snapshot isolation, I think. If you cannot turn this option on in your server, or if you run into I/O issues, read committed should still prevent deadlocks caused by read/write dependencies. Make sure that you don't run into anomalies: read committed will allow non-repeatable reads and phantom reads.
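As a rough sketch of the snapshot isolation route (the database, table, and column names are hypothetical), you would enable the option once and then use it from the reading session:

ALTER DATABASE YourDb SET ALLOW_SNAPSHOT_ISOLATION ON;

SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
BEGIN TRANSACTION;
    -- reads a consistent, committed snapshot without taking shared locks
    SELECT OrderId, Status FROM dbo.Orders;
COMMIT TRANSACTION;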
First of all, thanks a lot for your precious answers!
With the help of your answers, some research and a call with Microsoft DBA team, I have got the following solution.
Solution: To implement this solution we have to change the database property to Read Committed Snapshot. This will help SELECT statements avoid being blocked when other sessions hold locks on the same table.
- To support this solution, the database keeps versions of the data in tempdb, so we must have sufficient space in tempdb. If possible we should also move tempdb to a separate disk to split the I/O, which will improve performance.
The following KB article describes how to enable the Read Committed Snapshot property of the database:
http://technet.microsoft.com/en-us/library/ms175095(v=SQL.105).aspx
Alternatively, we can change this property through SSMS by right-clicking the database > Options > Miscellaneous > Is Read Committed Snapshot On, and setting the value of this property to True.
We do not have to restart the server to enable this property; however, we must note that 'When setting the READ_COMMITTED_SNAPSHOT option, only the connection executing the ALTER DATABASE command is allowed in the database. There must be no other open connection in the database until ALTER DATABASE is complete. The database does not have to be in single-user mode.'
This means we need a small amount of downtime from the application side.
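For reference, the same change can be scripted in T-SQL (the database name is a placeholder) and verified afterwards:

ALTER DATABASE YourDb SET READ_COMMITTED_SNAPSHOT ON;

SELECT name, is_read_committed_snapshot_on
FROM sys.databases
WHERE name = 'YourDb';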
Hope the MOM above would help you all too. :)
Thanks,
Rahuul Dutta

Oracle undo tablespace

I used Toad to connect to Oracle and issued a SELECT query, and while exiting it asked me to issue a commit or rollback. I pressed the Escape key and the message box disappeared. Then I ended the connection. Will this cause any problems for the tables I queried? Will it cause the undo tablespace or rollback segment to go out of control?
Thanks in advance.
This is what the Oracle PMON (Process Monitor) process will handle. If a session is terminated in a disorderly manner, then the session will be left in a state that must be cleared up. The PMON process will then roll back any active transactions.
So to answer your question: No, your transaction is no longer active and all your changes were rolled back (if you did not explicitly commit).
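If you want to double-check, a query along these lines (it requires access to the V$ views) will show whether any transaction is still active or being rolled back:

SELECT s.sid, s.username, t.status, t.used_ublk
FROM   v$transaction t
JOIN   v$session s ON s.taddr = t.addr;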

Updating on commit to avoid deadlocks

I have a table that tracks the last update time of another table's partitions so our reconciler need only check the partitions that have been updated since the last reconcile. There are multiple threads updating the partitioned table and therefore updating the same row of the latest update time table several times each. This is obviously causing deadlocks. Is there a way to prevent these deadlocks by only updating once on commit?
I was thinking of maybe using a session local temporary table, but not sure how to transfer the values to the global table on commit.
There is no way to trigger a process on commit so that approach probably won't work.
Potentially, you could have each of the writer processes write to an Oracle Advanced Queue (AQ) and then have another process that de-queues the messages and actually applies them to the current table. That would mean that there would be some lag between the writer session committing and the AQ processor picking up and processing the message but that lag shouldn't be too long. You could do the same thing by having each writer thread insert into a queue-like table and having a separate thread process that table if you don't want to use AQ.
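If you go the queue-like table route instead of AQ, a rough sketch (all table and column names are invented, and the cleanup is deliberately simplified) might look like this:

-- Writers only ever insert, so they never contend on a shared row:
INSERT INTO partition_update_queue (partition_name, updated_at)
VALUES ('P_2024_01', SYSTIMESTAMP);

-- A single background job periodically folds the queue into the tracking table:
MERGE INTO partition_last_update d
USING (SELECT partition_name, MAX(updated_at) AS updated_at
       FROM   partition_update_queue
       GROUP  BY partition_name) q
ON (d.partition_name = q.partition_name)
WHEN MATCHED THEN UPDATE SET d.last_update = q.updated_at
WHEN NOT MATCHED THEN INSERT (partition_name, last_update)
                      VALUES (q.partition_name, q.updated_at);

-- A real job would delete only the rows it has just processed:
DELETE FROM partition_update_queue;
COMMIT;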
I'm confused, though, by how the process you are describing could cause a deadlock. Are you really talking about a deadlock (i.e. an ORA-00060 error is thrown and a deadlock trace file is generated)? What you are describing should lead to blocking locks, not deadlocks, unless there is more going on than you have told us.

Should I run VACUUM in transaction or after?

I have a mobile application sync process. The transaction does a lot of modification on the database. Since this is done on mobile I need to issue a VACUUM to compact the database.
I am wondering when I should issue the VACUUM:
in the transaction, as the final statement,
or after the transaction?
I am currently asking about SQLite, but if it's different for other engines (PostgreSQL, MySQL, Oracle, SQL Server), let me know in the answers.
Like it or not, when using PostgreSQL you can't run VACUUM inside a transaction, as stated in the manual:
VACUUM cannot be executed inside a transaction block.
I would say outside of the transaction. Certainly in PostgreSQL, VACUUM is designed to remove "dead" tuples (i.e. the old row versions left behind when a record has been changed or deleted).
If you're running VACUUM in a transaction that has modified records, these dead rows won't have been marked for deletion.
Depending on which type of VACUUM you're doing, it may also require a table lock which will block if there are other transactions running, so you could potentially end up in a deadlock situation (transaction 1 is blocked waiting for a table lock to do its VACUUM, transaction 2 gets blocked waiting for a row to be released that transaction 1 has locked.)
I'd also recommend that this isn't done from within the application but rather as a scheduled task, as it can take a while to complete and can negatively affect the speed of other queries.
As for SQL Server, there is no VACUUM - what you're looking for is shrink. You can turn on auto shrink in 2005, which will automatically reclaim space whenever the server decides to, or issue a DBCC statement to shrink the database and log file, but this depends on your backup routine and strategy at the per-database level.
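For the SQLite case in the question, the ordering is simply commit first and vacuum afterwards (the table and column here are invented for illustration); SQLite will refuse to run VACUUM while a transaction is open:

BEGIN;
DELETE FROM messages WHERE synced = 1;  -- the bulk modifications of the sync
COMMIT;

VACUUM;  -- run as its own statement, outside any transaction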
VACUUM is like defrag: it's good to do if you've recently deleted a lot of data, or maybe after you've inserted a lot, but by no means should you do it in every transaction. It's slower than almost any other database command and is more of a maintenance task.
We sometimes add/remove the majority of our db file, so then a vacuum would be a good idea, but I still would not consider it a part of the same transaction that did the work.
How frequently is the transaction run?
It's really a daily sort of process, not a query-by-query process, but if you use it without FULL it can be used in a transaction since it doesn't acquire a lock.
If you're going to do it, then it should be outside the transaction, since it is independent of the transaction's data integrity.

Locking Row in SQL 2005-2008

Is there a way to lock a row in the SQL 2005-2008 database without starting a transaction, so other processes cannot update the row until it is unlocked?
You can use ROWLOCK or other hints, but you should be careful.
The HOLDLOCK hint will instruct SQL Server to hold the lock until you commit the transaction. The ROWLOCK hint will lock only this record and not issue a page or table lock.
The lock will also be released if you close your connection or it times out. I'd be VERY careful doing this since it will stop any SELECT statements that hit this row dead in their tracks. SQL Server has numerous locking hints that you can use. You can see them in Books Online when you search on either HOLDLOCK or ROWLOCK.
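A minimal sketch of combining those hints (table, column, and key value are placeholders); the lock is held until the transaction ends:

BEGIN TRANSACTION;

SELECT *
FROM   dbo.Orders WITH (ROWLOCK, HOLDLOCK)
WHERE  OrderId = 42;

-- other sessions can still read the row but cannot modify it until we finish

COMMIT TRANSACTION;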
Everything you execute in the server happens in a transaction, either implicit or explicit.
You cannot simply lock a row with no transaction (i.e. make the row read-only). You can make the whole database read-only, but not just one row.
Explain your purpose and there might be a better solution: isolation levels, lock hints, or row versioning.
Do you need to lock a row, or would SQL Server's application locks do what you need?
An application lock is just a lock with a name that you can "lock", "unlock", and check whether it is locked; see the link above for details. (They get unlocked if your connection is closed, etc., so they tend to clean themselves up.)
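A minimal sketch with a session-owned application lock (the resource name is arbitrary), which also satisfies the "without starting a transaction" requirement from the question:

DECLARE @rc int;
EXEC @rc = sp_getapplock @Resource = 'MyRow:42', @LockMode = 'Exclusive',
                         @LockOwner = 'Session', @LockTimeout = 10000;
IF @rc < 0
    RAISERROR('Could not obtain the application lock.', 16, 1);

-- ... do the work that must be serialized across sessions ...

EXEC sp_releaseapplock @Resource = 'MyRow:42', @LockOwner = 'Session';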