In a database for a forum I mistakenly set the body to nvarchar(MAX). Well, someone posted the Encyclopedia Britannica, of course. So now there is a forum topic that won't load because of this one post. I have identified the post and run a delete query on it, but for some reason the query just sits and spins. I have let it go for a couple of hours and it just sits there; eventually it times out.
I have tried editing the body of the post as well, but that also sits and hangs. While my query runs, the entire database hangs, so I shut down the site in the meantime to prevent further requests while it does its thinking. If I cancel my query, the site resumes as normal and all queries for records that don't involve the one in question work fantastically.
Has anyone else had this issue? Is there an easy way to smash this evil record to bits?
Update: Sorry, the version of SQL Server is 2008.
Here is the query I am running to delete the record:
DELETE FROM [u413].[replies] WHERE replyID=13461
I have also tried deleting the topic itself which has a relationship to replies and deletes on topics cascade to the related replies. This hangs as well.
Option 1. This depends on how big the table itself is and how big the rows are.
Copy data to a new table:
SELECT *
INTO tempTable
FROM replies WITH (NOLOCK)
WHERE replyID != 13461
Although it will take time, the table should not be locked during the copy process.
Drop old table
DROP TABLE replies
Before you drop:
- script current indexes and triggers so you are able to recreate them later
- script and drop all the foreign keys to the table
Rename the new table
sp_rename 'tempTable', 'replies'
Recreate all the foreign keys, indexes and triggers.
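For the foreign-key step, a minimal sketch assuming the replies-to-topics relationship described in the question; the constraint and column names here are guesses, so script the real definitions first:
-- any FK on another table that references replies must be dropped before DROP TABLE replies
-- (the question only mentions replies referencing topics, so there may be none)
-- after sp_rename, recreate replies' own FK so deletes on topics still cascade:
ALTER TABLE [u413].[replies] ADD CONSTRAINT FK_replies_topics   -- hypothetical constraint name
    FOREIGN KEY (topicID) REFERENCES [u413].[topics] (topicID)  -- hypothetical column name
    ON DELETE CASCADE;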
Option 2. Partitioning.
Add a new bit column, called, let's say, 'Partition'; set it to 0 for all rows except the bad one, and to 1 for the bad one.
Create a partition function and scheme so that there are two partitions, 0 and 1.
Create a temp table with the same structure as the original table.
Switch partition 1 from original table to the new temp table.
Drop temp table.
Remove partitioning from the source table and remove new column.
The partitioning topic is not simple. There are some examples on the internet, e.g. "Partition switching in SQL Server 2005".
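A very rough sketch of the moving parts in Option 2, assuming the table lives on the PRIMARY filegroup and that the bit column [Partition] has been added as described; the object names are made up and index details are omitted:
-- boundary value 0: partition 1 holds Partition = 0, partition 2 holds Partition = 1
CREATE PARTITION FUNCTION pf_bad_row (bit) AS RANGE LEFT FOR VALUES (0);
CREATE PARTITION SCHEME ps_bad_row AS PARTITION pf_bad_row ALL TO ([PRIMARY]);
-- the replies table (its clustered index) then has to be rebuilt on ps_bad_row([Partition]),
-- and a staging table with the identical structure created on [PRIMARY]
ALTER TABLE replies SWITCH PARTITION 2 TO replies_staging;   -- moves only the bad row, as a metadata operation
DROP TABLE replies_staging;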
Start by checking if your transaction is being blocked by another process. To do this, you can run this command:
SELECT * FROM sys.dm_os_waiting_tasks WHERE session_id = {spid}
Replace {spid} with the correct spid number of the connection running your DELETE command. To get that value, run SELECT @@SPID before the DELETE command.
If the column sys.dm_os_waiting_tasks.blocking_session_id has a value, you can use activity monitor to see what that process is doing.
To open activity monitor, right-click on the server name in SSMS' Object Explorer and choose Activity Monitor. The Processes and Resource Waits sections are the ones you want.
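If it does, a quick way to see what the blocking session last executed (55 here is a made-up example of the blocking_session_id you found):
DBCC INPUTBUFFER(55);
-- or via the DMVs:
SELECT t.text
FROM sys.dm_exec_connections c
CROSS APPLY sys.dm_exec_sql_text(c.most_recent_sql_handle) t
WHERE c.session_id = 55;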
Since you're having issues deleting the record and recreating the table, have you tried updating the record?
Something like (changing "body" field name to whatever it is in the table):
update [u413].[replies] set body='' WHERE replyID=13461
Once you clear out the text from that single reply record you should be able to alter the data type of the column to set an upper bound. Something like:
alter table [u413].[replies] alter column body nvarchar(100)
Related
I need to alter the size of a column on a large table (millions of rows). It will be changed to nvarchar(n) rather than nvarchar(max), so from what I understand it should not be a long-running change. But since I will be doing this on production, I wanted to understand the ramifications in case it does take long.
Should I just hit F5 from SSMS like I execute normal queries? What happens if my machine crashes? Or goes to sleep? What's the general best practice for doing long running updates? Should it be scheduled as a job on the server maybe?
Thanks
Please DO NOT just hit F5. I did this once and lost all the data in the table. Depending on the change, the update statement that is created for you actually stores the data in memory, drops the table, creates a new one with the change you want, and repopulates the data from memory. However, in my case one of the changes I made was adding a unique constraint, so the repopulation failed, and since the statement had already finished, the data held in memory was gone. This left me with a new, empty table.
I would create the table you are changing, with the change(s) you want, as a new table. Then select the data into the new table, then rename the tables in a single transaction. If there is potential for data to be entered into the table while this is running and that is an issue, you may want to lock the table.
Depending on the size of the table and the duration of the statement, you may want to save the locking and renaming for later: after the initial population of the new table, do a differential population of the new data and then rename the tables.
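A minimal sketch of that copy-and-swap, with made-up table and column names (the real column list and new size would come from your table):
-- create the replacement table with the changed column definition
CREATE TABLE dbo.posts_new (
    id int NOT NULL PRIMARY KEY,
    body nvarchar(500) NULL,      -- was nvarchar(max)
    createdDate datetime NULL
);
INSERT INTO dbo.posts_new (id, body, createdDate)
SELECT id, CAST(body AS nvarchar(500)), createdDate
FROM dbo.posts;
-- swap the names inside one transaction so readers never see the table missing
BEGIN TRAN;
    EXEC sp_rename 'dbo.posts', 'posts_old';
    EXEC sp_rename 'dbo.posts_new', 'posts';
COMMIT;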
Sorry for the long post.
Edit:
Also, if the connection times out due to duration, then run the insert statement locally on the DB server. You could also create a job and run that, however it is essentially the same thing.
This is probably laughably easy for an SQL expert, but SQL (although I can use it) is not really my thing.
I've got a table in a DB. (Let's call it COMPUTERS)
About 10,000 rows, 25 columns, and 1 unique key: the ASSETS column.
Occasionally an external program will delete 1 or more of the rows, but isn't supposed to do that, because we still need to know some info from those rows before we can really delete the items.
We can't control the behavior of the external application so we came up with a different idea:
We want to create a second identical table (COMPUTERS_BACKUP) and initially fill this with a one-on-one copy of COMPUTERS.
After that, once a day copy new records from COMPUTERS to COMPUTERS_BACKUP and update those records in COMPUTERS_BACKUP where the original in COMPUTERS has changed (ASSETS column will never change).
That way we keep the last state of a record deleted from COMPUTERS.
Can someone supply the code for a stored procedure that can be scheduled to run once a day? I can probably figure this out myself, but it would take me several hours or so and I'm very pressed for time.
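For reference, a minimal sketch of the once-a-day sync as a scheduled statement, assuming ASSETS is the key; col1 and col2 are placeholders standing in for the remaining columns:
MERGE COMPUTERS_BACKUP AS target
USING COMPUTERS AS source
ON target.ASSETS = source.ASSETS
WHEN MATCHED THEN
    UPDATE SET target.col1 = source.col1,
               target.col2 = source.col2          -- repeat for each column
WHEN NOT MATCHED BY TARGET THEN
    INSERT (ASSETS, col1, col2)
    VALUES (source.ASSETS, source.col1, source.col2);
-- note: no WHEN NOT MATCHED BY SOURCE clause, so rows deleted from COMPUTERS stay in the backup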
Just create a trigger for inserts on the Computers table:
CREATE TRIGGER newComputer
ON [Computers]
AFTER INSERT
AS
BEGIN
    INSERT INTO COMPUTERS_BACKUP
    SELECT * FROM Inserted
END
It'll fire when you insert a new computer into the Computers table and will insert the record into the backup table as well.
When you update Computers you can keep Computers_BACKUP in sync too, with an UPDATE trigger:
CREATE TRIGGER computerUpdated
ON [Computers]
AFTER UPDATE
AS
BEGIN
    -- the pre-update values are available via SELECT * FROM Deleted
    -- the post-update values are available via SELECT * FROM Inserted
    UPDATE b
    SET b.col1 = i.col1,   -- repeat for each column you want to keep in sync
        b.col2 = i.col2
    FROM Computers_BACKUP b
    JOIN Inserted i ON b.ASSETS = i.ASSETS   -- ASSETS is the unique key mentioned in the question
END
In the end I guess you don't want to delete the backup row when the original record is deleted from the Computers table. You can check more examples of using triggers on MSDN.
When a record is removed from the Computers table:
CREATE TRIGGER computerDeleted ON [Computers] AFTER DELETE
AS
BEGIN
    INSERT INTO Computers_BACKUP
    SELECT * FROM Deleted
END
Besides creating triggers, you may look into enabling Change Data Capture, which is available in SQL Server Enterprise Edition. It may be overkill, but it is worth mentioning and you may find it useful for other tables and objects.
IMHO a possible solution, if you never delete records (only update) from that table in your application, can be to introduce an INSTEAD OF DELETE trigger
CREATE TRIGGER tg_computers_delete ON computers
INSTEAD OF DELETE AS
DELETE computers WHERE 1=2;
It will prevent the deletion of the records.
Here is a SQLFiddle demo.
An AFTER DELETE trigger (FOR DELETE) can help you guard this table:
CREATE TRIGGER backup_row_before_delete ON COMPUTERS_Table FOR Delete
as
INSERT INTO Computers_Backup
SELECT deleted.* from deleted
You can change deleted.* to deleted.col1, deleted.col2 if you want to keep certain columns only.
will delete 1 or more of the rows, but isn't supposed to do that
Then you have permission and integrity issues.
You can most certainly use a trigger to record deletions (and updates of course) but I would not recommend you use it purely to keep a copy of stuff you didn't want deleted in the first place!
Remove delete permissions if you have to or beef up your data integrity if you can. Without your schema it's hard to tell exactly how though.
Finally, use your (INSTEAD OF) trigger to check whatever conditions you need to prevent the delete when appropriate.
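A hedged sketch of that last point; the 'reviewed' flag is made up to stand in for whatever condition you actually need:
CREATE TRIGGER tg_computers_guard ON COMPUTERS
INSTEAD OF DELETE
AS
BEGIN
    -- only let the delete through for rows that have already been reviewed
    DELETE c
    FROM COMPUTERS c
    JOIN deleted d ON c.ASSETS = d.ASSETS
    WHERE c.reviewed = 1;   -- hypothetical column
END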
We are using SQL Server 2008. We have an existing database and needed to add a new column to one of the tables, which has only 2700 rows, but one of its columns is of type VARCHAR(8000). When I try to add the new column (CHAR(1) NULL) using the ALTER TABLE command, it takes too much time! After 5 minutes the command was still running, so I stopped it.
Below is the command I was using to add the new column:
ALTER TABLE myTable Add ColumnName CHAR(1) NULL
Can someone help me understand how SQL Server handles the ALTER TABLE command? What happens exactly? Why does it take so much time to add a new column?
EDIT:
What is the effect of table size on the ALTER command?
Altering a table requires a schema lock. Many other operations require the same lock too. After all, it wouldn't make sense to add a column halfway through a select statement.
So a likely explanation is that a process had the table locked for 5 minutes. The ALTER then has to wait until it gets the lock itself.
You can see blocked processes, and the blocking process, from the Activity Monitor in SQL Server Management Studio.
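The same information is available from T-SQL if you prefer a query over Activity Monitor:
-- sessions that are currently blocked, and who is blocking them
SELECT session_id, blocking_session_id, wait_type, wait_time, command
FROM sys.dm_exec_requests
WHERE blocking_session_id <> 0;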
Well, one thing to bear in mind is that you were adding a new fixed length column to the table. The way that rows are structured in storage, all fixed length columns are placed before all of the variable length columns, for each row. So every row would have had to be updated in storage to make this change.
If, in turn, this caused the number of rows which could be stored on each page to change, a great many new allocations may have been required.
That being said, for the number of rows indicated, I wouldn't have thought it should take 5 minutes - unless, as Andomar indicated, there was some lock contention also involved.
I need a query that will alter a single column from nvarchar(max) to nvarchar(32). The real problem is this table has 800,000 rows, and my alter table myTable alter column mycolumn statement times out. Any suggestions or tips?
Maybe adding a new column, copying the data into the new column, and then removing the old column and renaming the new column to the original name will help.
Another, simpler approach would be to create a new table with the needed specification and then do SELECT ... INTO it. After this is completed the old table can be dropped.
If you run a SQL script in SSMS, it has no timeout set. You only get a timeout from client code (C# etc.), where the default CommandTimeout is 30 seconds.
I would suggest changing the timeout to 3600 for example, or running it in SSMS.
The other thing to think of: this change will be logged so it can roll back. Make sure you resize the log file upfront to a respectable size, so it doesn't have to grow by 10% each time the changes you are making use up the current log space.
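A hedged sketch of pre-growing the log; the database and logical file names are placeholders, so check yours in sys.database_files first:
-- find the logical name of the log file (size is reported in 8 KB pages)
SELECT name, type_desc, size * 8 / 1024 AS size_mb FROM sys.database_files;
-- then grow it upfront so the ALTER doesn't pay for repeated autogrowth
ALTER DATABASE MyDb
MODIFY FILE (NAME = MyDb_log, SIZE = 4096MB);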
Or combine this with codymanix's answer
Two things I can think of to try:
First do an UPDATE truncating the data to 32 characters; this might help the ALTER run more quickly, since there will be no over-length data left for it to deal with. The UPDATE could be batched if necessary (see the sketch after this list).
Or
Create a new nvarchar(32) column with a temporary name
Populate it from the nvarchar(max) column
DROP the nvarchar(max) column
Rename the (32) column to the original name of the (max) column
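The batched truncation from the first option could look roughly like this (table and column names taken from the question, batch size is arbitrary):
-- trim over-length values in chunks so locks and the log stay small
WHILE 1 = 1
BEGIN
    UPDATE TOP (10000) myTable
    SET mycolumn = LEFT(mycolumn, 32)
    WHERE LEN(mycolumn) > 32;
    IF @@ROWCOUNT = 0 BREAK;
END
-- with no over-length data left, the ALTER won't fail on truncation
ALTER TABLE myTable ALTER COLUMN mycolumn nvarchar(32);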
See this.
You can also specify the timeout counter or just disable it via GUI.
When you execute the statement, open another copy of SSMS, and run the statement
sp_who2
That will show you, among other things, a column called "BlkBy". That's the SPID of a process which may be blocking your query from completing. You may have an open transaction somewhere else in the system. If you know what that process is, and you know it won't blow up your universe, kill it.
I have a small (200 rows / 400kb) table with 4 columns - nvarchar(MAX), nvarchar(50), and two ints. I'm having a problem with one particular row in which I can select and update the int fields, but when I attempt to select or update the nvarchar fields, the query runs indefinitely (at least 45 minutes before I cancel). I'm also unable to delete this row, or even truncate the table (again, the query runs indefinitely).
Any ideas? One of the int fields has a primary key, but there are no foreign keys.
Looks like you have an uncommitted transaction locking things down.
You can free them up through the Activity Monitor. It is located in the Management folder of the database you are looking at.
Expand that, right click on Activity Monitor, and select View Processes. You can right click on processes and kill them there.
This isn't always the best solution, especially with a production database. You generally want to figure out why the transaction is not committed, and either manually roll it back or commit it.
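If you'd rather do it from a query window, a minimal sketch (the 52 in KILL is a made-up example SPID):
-- report the oldest active transaction in the current database
DBCC OPENTRAN;
-- if you decide that session is safe to end:
KILL 52;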
Are you sure that row isn't locked? Open a new connection, run your select query, and note the SPID (shown at the top of the window in Management Studio 2000/2005 and at the bottom in 2008). In another window run sp_who2 and find that SPID to see whether it is being blocked.
If you don't care about uncommitted data, or just want to test the row, do:
select * from table with (nolock) where key = 'mykey'
If DBCC CHECKDB doesn't help, like Thirster42 said, I would just copy the data, except the problem row, into a new table and drop the old table...
I would also check the size of your TempDB.
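For the last point, a quick way to check how big tempdb currently is:
-- reports tempdb's total size and unallocated space
USE tempdb;
EXEC sp_spaceused;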
Check Profiler; maybe there is a trigger being fired?
See the execution plan for anything unusual.