"Transaction Log for database is full" SQL error - sql

I am trying to insert some values into a SQL database, but I keep receiving the error "Transaction Log for database is full".
I looked this up online and some people suggested committing work more frequently. I have done that, with no change.
I don't have authorization to change the database server. Is this problem client-side or server-side? If client-side, what is the solution?

You are probably getting error code SQL0964, aren't you?
Most likely you are executing your DML statements while there is a lot of activity in the database, and you got this message because your statements exceeded the log size limit.
The concurrent transactions have used all the available logs (primary and secondary), so your statements cannot be written to the logs.
Also, if the database uses archive logging, the file system may be full, preventing the database from archiving the active logs and blocking your statements from being written to the transaction logs.
Either of these situations has to be resolved by your DBA team. On your side, you can only change the way you insert the rows by committing more frequently (if you currently commit every 500 rows, try committing every 100 rows, something like that).
For more information about the error: http://publib.boulder.ibm.com/infocenter/db2luw/v9r7/topic/com.ibm.db2.luw.messages.sql.doc/doc/msql00964c.html
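A sketch of that batched-commit idea (the table names and key ranges here are made up for illustration; pick a chunk size that fits your log capacity):

```sql
-- Instead of loading everything in one huge unit of work:
--   INSERT INTO target_table SELECT * FROM staging_table;
-- split the load into smaller chunks, committing after each one so
-- the log records for a chunk can be released before the next starts.
INSERT INTO target_table
    SELECT * FROM staging_table WHERE id BETWEEN 1 AND 100;
COMMIT;

INSERT INTO target_table
    SELECT * FROM staging_table WHERE id BETWEEN 101 AND 200;
COMMIT;
-- ...and so on until the staging rows are exhausted.
```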

It's a server-side issue; you should forward the messages to the DBA team.

Related

SQL Server MERGE on a large table with small log file

I am running a MERGE statement on a large table (5M rows) with a small log file (2 GB). I am getting this error:
Merge for MyTable failed: The transaction log for database 'MyDb' is full due to 'ACTIVE_TRANSACTION'.
Can this be solved by any action other than extending the log file? I can't really afford to extend the log file at the moment.
If you have a fixed log file size, you essentially have two options:
Temporarily change the recovery model of your database from FULL to BULK_LOGGED. You'll lose the ability to do point-in-time recovery during this period, but it allows you to do the operation quickly and then switch back. There are other caveats, so do some research to make sure this is what you want to do.
Instead of changing the transaction log, you can adopt a batching approach: commit small batches of changes at a time, allowing the log to be cleared as needed.
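A rough sketch of both options (the database, table, column names, and batch size are illustrative assumptions, not your actual schema):

```sql
-- Option 1: temporarily switch the recovery model (FULL -> BULK_LOGGED).
-- Point-in-time recovery is lost for this window; research the caveats first.
ALTER DATABASE MyDb SET RECOVERY BULK_LOGGED;
-- ... run the large operation here ...
ALTER DATABASE MyDb SET RECOVERY FULL;
-- Take a log backup afterwards to restart the log backup chain.

-- Option 2: batch the MERGE by key range so each transaction stays small
-- and the log space can be reused between batches.
DECLARE @BatchSize int = 50000, @MinId int, @MaxId int;
SELECT @MinId = MIN(Id), @MaxId = MAX(Id) FROM SourceTable;

WHILE @MinId <= @MaxId
BEGIN
    MERGE MyTable AS t
    USING (SELECT Id, Value FROM SourceTable
           WHERE Id BETWEEN @MinId AND @MinId + @BatchSize - 1) AS s
        ON t.Id = s.Id
    WHEN MATCHED THEN UPDATE SET t.Value = s.Value
    WHEN NOT MATCHED THEN INSERT (Id, Value) VALUES (s.Id, s.Value);
    -- Each MERGE here is its own auto-committed transaction.
    SET @MinId = @MinId + @BatchSize;
END;
```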

How to avoid SQL Server hangs due to uncommitted transactions caused by poor software design

The problem: a .NET application saves many records to SQL Server. BeginTrans is used, and right before the commit a warning message asks the end user to confirm whether to proceed with saving the data. The user simply left the computer and walked away!
Now all other users are unable to access the locked records; sometimes almost the entire system is affected. Almost all transactions update the same records. The confirmation message must be shown after the data gets updated and before the commit, so that the user can still roll back. What could be the best solution?
If no solution is found, the last resort would be to roll back, show the confirmation message, and, if the user accepts, save the data again without any confirmation message (which I don't think is the right way).
My question is: what is the best I can do? Any ideas?
This sounds like a WinForms app? It also sounds like you want to confirm the intent of the user's action. Are you in a position to only start the transaction once they confirm they intend to save the data?
Ideally, you should
Prompt the user via [OK | Cancel]
Perform the database transaction
If the transaction fails (deadlock or any other error), inform the user that the save operation failed
In other words, the update of records should be a synchronous call.
EDIT: after understanding the specifics mentioned in the comment below, I would recommend some form of server-side task queue that all these requests flow through. Your client would submit a request to the server, and the server application would then be responsible for updating records in the database. The clients would make their requests to this application, and the requests would be processed in the order they were received. I don't have much experience with inventory-tracking software, but I understand its need to be absolutely correct. So this is just a rough idea; I'm sure someone with more experience in inventory tracking will have a better pattern. The proposed pattern creates a large bottleneck on the server responsible for updating the records. For example, this pattern would be terrible for someone like Amazon.

Transaction log shipping backup job remains after deleted database

I have been a lurker for several years and I think I have a question that hasn't been answered here.
We were in the middle of some pretty intense maintenance on our SQL Server last night. Our primary database MDF file was very badly fragmented. We maintain a test copy of this database for testing and proof-of-concept purposes.
I had set up log shipping on the test database, and without thinking I deleted the test database without removing the log shipping first. Now I am getting error 14421: The log shipping secondary database SERVER.database has restore threshold of 45 minutes and is out of sync. No restore was performed for 10310 minutes. Restored latency is 0 minutes. Check agent log and logshipping monitor information.
I have removed everything I could with T-SQL. My research leads me to believe that this error is due to the backup job still trying to run, but I cannot find this job to remove it. It's really not a big deal, but the error shows up every couple of minutes in the log.
Is there anything I can do?
Thanks in advance!
Log shipping information is stored in msdb, not in the database itself. All you need to do is create a new database with the same name as the deleted database. Right-click it for Properties, go to the log shipping tab, and uncheck the box to log ship the database. When you click OK, the jobs (on both primary and secondary) will be removed.
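If the GUI route doesn't work, the same metadata can usually be removed with the log shipping system procedures in master, run on the primary server (all names below are placeholders for your setup):

```sql
-- Remove the primary/secondary pairing first, then the primary configuration.
-- These system procedures live in master on SQL Server 2005 and later.
EXEC master.dbo.sp_delete_log_shipping_primary_secondary
    @primary_database   = N'TestDb',
    @secondary_server   = N'SECONDARYSERVER',
    @secondary_database = N'TestDb';

EXEC master.dbo.sp_delete_log_shipping_primary_database
    @database = N'TestDb';
```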

How do I undo an update statement I made to a database

It's a test environment. I needed some data to test an UPDATE query, but I accidentally updated a column in all rows to contain wrong data. Must I use a backup to restore the data to its previous state, or is there some secret with the transaction log that I can take advantage of?
Thanks in advance.
There is a not-so-secret log, aptly named the transaction log, that you can use to recover to a point in time. Here's how: that annoying little file with the .ldf extension is the transaction log, as opposed to the .mdf file that holds your normal database data.
Unless you have truncated the transaction log (.ldf) or otherwise mucked with it, you should be able to do exactly the kind of restore (undo) you're looking for.
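Assuming the database is in the FULL recovery model, the point-in-time restore looks roughly like this (the database name, file paths, and timestamp are placeholders):

```sql
-- 1. Back up the tail of the log so everything up to the mistake is captured
--    (this leaves the database in the RESTORING state):
BACKUP LOG MyDb TO DISK = N'C:\Backups\MyDb_tail.trn' WITH NORECOVERY;

-- 2. Restore the last full backup without recovering the database:
RESTORE DATABASE MyDb FROM DISK = N'C:\Backups\MyDb_full.bak' WITH NORECOVERY;

-- 3. Replay the log up to just before the bad UPDATE ran:
RESTORE LOG MyDb FROM DISK = N'C:\Backups\MyDb_tail.trn'
    WITH STOPAT = N'2013-05-01T14:29:00', RECOVERY;
```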
If your database was in the FULL recovery model, then you can try reading the transaction log using a third-party tool such as this one, or you can try doing it yourself with the DBCC LOG command.
If the database is in full recovery, a lot of data is stored in the transaction log, but it's not easily readable, because Microsoft has never published official documentation for it and because its purpose is not recovering individual statements but making sure transactions are committed correctly.
However, there are workarounds: using the tool above (a paid tool, unfortunately, but it has a trial) or decoding the output of DBCC LOG yourself.
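For reference, a minimal DBCC LOG invocation looks like this (it is an undocumented command, so the output format may change between SQL Server versions):

```sql
-- Second argument is the detail level, roughly 0 (minimal) to 4 (full row data).
DBCC LOG ('MyDb', 3);
```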
Not unless you wrapped your SQL in a transaction block (BEGIN TRANSACTION, then ROLLBACK or COMMIT). That's one of the dangerous things about SQL Server. With Oracle you have to explicitly commit each transaction, which is much safer IMHO.
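For next time, the safe pattern for trying out an UPDATE looks like this (table and column names are invented for the example):

```sql
BEGIN TRANSACTION;

UPDATE Customers
SET Status = 'Inactive'
WHERE Region = 'West';            -- the statement under test

SELECT TOP (10) * FROM Customers
WHERE Region = 'West';            -- inspect the effect before deciding

ROLLBACK TRANSACTION;             -- undo it; use COMMIT once it looks right
```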

Log of Queries executed on SQL Server

I ran a T-SQL query on a SQL Server 2005 database. It was supposed to run for 4 days, but when I checked after 4 days I found the machine had been shut down abruptly. I want to trace what happened with the query, in other words, what the status of the query was when the machine went down.
Is there a mechanism to check this?
It may be very difficult to do such a post-mortem. While it may not be possible to determine which portion of the long-running query was active when the shutdown occurred, it is important to try to find the root cause of the problem!
Depending on how the query is written, SQL Server will attempt to roll back any SQL that wasn't completed (or any uncommitted transaction, if transactions were used). This happens the first time the server is restarted, so if you have any desire to analyze the SQL transaction log (yuck!), make a copy first.
Before getting to the part of the query which may have been running at the time of the shutdown, it may be preferable to look into SQL logs (I mean application logs, with information messages, timestamps and such, not SQL transaction logs for a given database), in search of basic facts, and possibly of an error message, happening some time prior to the shutdown and that could plausibly be the origin of the problem. Do also check the Windows event logs, as they may indicate abnormal system-wide conditions that could be a cause or consequence of the SQL shutdown.
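On the SQL Server side, the error log can also be read from T-SQL; sp_readerrorlog is undocumented but widely used (the argument is the log file number, 0 being the current one):

```sql
-- Read the current SQL Server error log; increase the number for older,
-- archived logs (e.g. 1, 2, ...).
EXEC sp_readerrorlog 0;
```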
The SQL transaction log can be "decompiled" and used to understand what was actually updated/inserted/deleted in the database; however, this type of information for a 4-day run may be buried quite deep and interspersed with unrelated queries. In fact, running out of disk space for the SQL log could possibly cause the server to become erratic (I would expect more graceful handling of such a situation, but if the drive in question was also the system drive, bad things could happen...).
Finally, by analyzing the 4-day-long SQL script, it may be possible to infer which parts were completed, thanks to various side effects of the script. If nothing else, this review may allow putting the database back into a coherent state if necessary, or truncating the script to exclude the parts already completed, in preparation for a new run to complete the work.