All,
We often delete many rows from a table, and even though we use set rowcount 10000, most of the time we still fill up the transaction log. Is there anything we can do to avoid this problem?
Thanks,
M
Two things you can do:
you can set your database's recovery model to SIMPLE - this lets the log be truncated automatically at each checkpoint so it doesn't keep growing - that's only part of a fix, however
you need to establish frequent transaction log backups - especially just before and just after batch deletes.
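For illustration, here is a rough sketch of combining both ideas on SQL Server 2005 or later - the table, column, database name and backup path are all placeholders:

-- Delete in small batches so each transaction stays short
WHILE 1 = 1
BEGIN
    DELETE TOP (10000) FROM BigTable WHERE CreatedDate < '20050101';
    IF @@ROWCOUNT = 0 BREAK;

    -- Under the FULL recovery model, back up the log between batches so its space can be reused
    BACKUP LOG SampleDb TO DISK = N'C:\Backups\SampleDb_log.trn';
END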
This is really more of a sysadmin/DBA question, and thus you'll probably get more (and more useful) answers on http://serverfault.com.
Before and after deleting data, you can run this command to empty the transaction log:
dump tran SAMPLE_DB with truncate_only
go
So, I can successfully run any SELECT statement, but any UPDATE statement just hangs until it eventually times out. The same happens when trying to execute any stored procedure. Other users who connect to the database can run anything without running into this problem.
Is there a cache per user that I can dump, or something along those lines? I usually get sick of waiting and cancel the operation, so I don't know whether that has contributed to the problem or not.
Just for reference, it's things as simple as these:
UPDATE SOME_TABLE
SET SOME_COLUMN = 'TEST';
EXECUTE SOME_PROCEDURE(1234);
But this works:
SELECT * FROM SOME_TABLE; -- various WHERE clauses don't cause any problems.
UPDATE:
Probably a little disappointing for anyone who came here looking for an answer to a similar problem, but the issue ended up being twofold. First, the DBA didn't think it was important to give me many details, but there were limitations intentionally set on the Oracle server for procedures in general (temp space issues and things of that ilk). Second, there was an update to the procedure that I wasn't aware of, which ran a sub-query for every record pulled by the query (thousands of records). That was removed and now it's running as expected.
In my experience this happens most often because there is another uncommitted operation on the table. For example: User 1 successfully issues an update but does not commit it or roll it back. User 2 (or even another session of User 1) issues another update which just hangs until the other pending update is committed or rolled back. You say that "other users" don't have the same problem, which makes me wonder if they are committing their changes. And if so, if they are updating the same table or a different one.
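If you want to check for that in Oracle, a query along these lines against v$session (assuming you have the privileges to read it) will show which session is blocking yours:

-- Sessions that are currently blocked, and the session blocking them
SELECT sid, serial#, username, event, blocking_session, seconds_in_wait
FROM v$session
WHERE blocking_session IS NOT NULL;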
I have a question concerning transaction isolation in SQL Server. The default isolation level is set to 2 (READ COMMITTED). In the first transaction, I insert some data into the users table; in the second, I try, unsuccessfully, to select all data from the same table. It seems that the second transaction waits for the first one to commit or roll back.
Does anyone have an explanation?
That is exactly what READ COMMITTED means: the second transaction will wait until it can read committed data, i.e. until the first transaction commits or rolls back. There are ways to either read the uncommitted data or skip it, but that is not something you would normally want to do.
There is quite a lot of material available on this, for example this article on TechNet.
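As a minimal illustration, assuming the users table has id and name columns (placeholder names), you can reproduce the behaviour with two sessions:

-- Session 1: insert but do not commit yet
BEGIN TRANSACTION;
INSERT INTO users (id, name) VALUES (1, 'alice');

-- Session 2: under READ COMMITTED this blocks until session 1 commits or rolls back
SELECT * FROM users;

-- Session 2: reads the uncommitted row instead of waiting (a dirty read)
SELECT * FROM users WITH (NOLOCK);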
I have a SQL statement that updates records in a table if the query returns any records. The query only returns records if they need to be updated. When I run the select part of the query on its own I get no records, so when the update runs no records should be updated.
The problem I'm having is that the query in the stored procedure won't finish because the transaction log fills up before the query can complete. I'm not concerned about the transaction log filling up right now.
My question is: if no records are being updated, then why is anything being written to the transaction log?
We need more information before this problem can be solved ...
Remus has a great idea to look at the entries in the log file.
Executing DBCC SQLPERF(logspace) will tell you how full the log file is.
Clear the log file using a transaction log backup. This is assuming the recovery model is FULL and a FULL backup has been done.
Re-run the update stored procedure. Look at the transaction log file entries.
A copy of the stored procedure and table definitions would be great. Looking for other processes (sp_who2) during the execution that might fill the log is another good place to look.
Any triggers that might cause updates, deletes or inserts can add to the log file size, as Martin suggested.
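For example, the log check and the log backup mentioned above might look like this (the database name and path are placeholders):

-- How full is each database's log file?
DBCC SQLPERF(LOGSPACE);

-- Back up the log so its space can be reused (FULL recovery model, full backup already taken)
BACKUP LOG YourDatabase TO DISK = N'C:\Backups\YourDatabase_log.trn';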
Good luck.
Looks like the issue was in the join. It was trying to join so many records that tempdb was filling up to the point where there was no more space on the drive.
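If anyone hits something similar, a quick way to see how much free space is left in tempdb is:

-- Rough check of unallocated space in tempdb (pages are 8 KB)
SELECT SUM(unallocated_extent_page_count) * 8 / 1024 AS free_space_mb
FROM tempdb.sys.dm_db_file_space_usage;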
I have a SQL Server 2008 database and I have a problem with this database that I don't understand.
The steps that caused the problems are:
I ran a SQL query to update a table called authors from another table called authorAff
The authors table has 123,385,300 records and the authorsAff table has 139,036,077
The query ran for about 7 days but didn't finish
I decided to cancel the query to do it another way.
The connection on which I was running the query suddenly disconnected, so the database went into recovery until the query was cancelled
The server was shut down many times afterwards because of some electricity problems
The database took about two days to recover.
Now when I run this query
SELECT TOP 1000 *
FROM AUTHORS WITH(READUNCOMMITTED)
It executes and returns results, but when I remove the WITH(READUNCOMMITTED) hint it gets blocked by a process running on the master database that appears in the Activity Monitor only with the command [DB STARTUP], and no results show up.
So what is the DB STARTUP command, and if it's a problem, how can I solve it?
Thank you in advance.
I suspect that your user database is still trying to roll back the transaction that you canceled. A general rule of thumb is that it will take about the same amount of time, or more, for an aborted transaction to roll back as it took to run.
The rollback can't be avoided, even with the SQL Server stops and starts you had.
The reason you can run a query WITH(READUNCOMMITTED) is that it ignores the locks associated with the transaction that is rolling back. Your query results are considered unreliable, but ironically, the results are probably what you want to see, since the blocking process is a rollback.
The best solution is to wait it out, if you can afford to do so. You may be able to find ways to kill the blocking process, but then you should be concerned with database integrity.
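If you just want to see how far along the rollback/recovery is, something like this against sys.dm_exec_requests (SQL Server 2005 and later) should give an estimate:

-- Progress of the recovery/rollback work
SELECT session_id, command, percent_complete,
       estimated_completion_time / 60000.0 AS estimated_minutes_remaining
FROM sys.dm_exec_requests
WHERE command IN ('DB STARTUP', 'ROLLBACK', 'KILLED/ROLLBACK');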
My work has a financial application, written in VB.NET with SQL, that several users can be working on at the same time.
At some point, one user might decide to Post the batch of entries that they (and possibly other people) are currently working on.
Obviously, I no longer want any other users to add, edit, or delete entries in that batch after the Post process has been initiated.
I have already seen that I can lock all data by opening the SQL transaction the moment the Post process starts, but the process can be fairly lengthy and I would prefer not to have the Transaction open for the several minutes it might take to complete the function.
Is there a way to lock just the records that I know need to be operated on from VB.NET code?
If you are using Oracle, you would use SELECT ... FOR UPDATE on the rows you are locking.
Here is an example:
SELECT address1 , city, country
FROM location
FOR UPDATE;
You probably want to set an isolation level for the entire transaction rather than using with (rowlock) on specific tables.
Look at this page:
http://msdn.microsoft.com/en-us/library/ms173763.aspx
Specifically, search within it for 'row lock', and I think you'll find that READ COMMITTED or REPEATABLE READ are what you want. READ COMMITTED is the SQL Server default. If READ COMMITTED doesn't seem strong enough to you, then go for REPEATABLE READ.
Update: After reading one of your follow up posts, you definitely want repeatable read. That will hold the lock until you either commit or rollback the transaction.
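As a rough sketch (BatchEntries and BatchId are placeholder names), the Post process could look like this:

SET TRANSACTION ISOLATION LEVEL REPEATABLE READ;
BEGIN TRANSACTION;

-- Rows read here stay locked until COMMIT/ROLLBACK, so other users cannot modify or delete them
SELECT * FROM BatchEntries WHERE BatchId = 1234;

-- ... run the Post processing ...

COMMIT TRANSACTION;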
Add with (rowlock) to your SQL query.
SQL Server Performance article
EDIT: ok, I misunderstood the question. What you want is transaction isolation. +1 to Joel :)
Wrap it in a transaction and use HOLDLOCK + UPDLOCK in the SELECT.
Example:
begin tran

select * from SomeTable with (holdlock, updlock)
where ....

-- processing here

commit