SQL Server error "Could not continue scan with NOLOCK due to data movement." - sql

I am having an issue when running queries or stored procedures. Every time I run a query I get the following error:
Could not continue scan with NOLOCK due to data movement.
If I remove the WITH (NOLOCK) hint, I get a different error:
Msg 824, Level 24, State 2, Line 1
SQL Server detected a logical consistency-based I/O error: incorrect pageid (expected 1:19818941; actual 1:19818957). It occurred during a read of page (1:19818941) in database ID 9 at offset 0x000025cd37a000 in file 'E:\SQLDATA\MSCRM.mdf'. Additional messages in the SQL Server error log or system event log may provide more detail. This is a severe error condition that threatens database integrity and must be corrected immediately. Complete a full database consistency check (DBCC CHECKDB). This error can be caused by many factors; for more information, see SQL Server Books Online.
What should I do to resolve this error?

First, obviously, try DBCC CHECKDB.
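A minimal first pass might look like this (MSCRM is the database named in the error message; the options just keep the output focused on problems):
DBCC CHECKDB ('MSCRM') WITH NO_INFOMSGS, ALL_ERRORMSGS;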
If that cannot resolve the issue, you may need to restore from a backup and then manually copy over the most recent changes. Hopefully you have been doing nightly backups... ?

If the error message is prefixed with an object name (proc, trigger, function), then you can just drop and recreate that object, or alter it if possible.

Related

Running updateSQL for the first time gives database returned ROLLBACK error DatabaseException

I am using Liquibase v3.9 with PostgreSQL v11 for the first time.
When testing out my changelog for the very first time I run updateSQL to see the output of the SQL that will be run against the database. I get this error:
Unexpected error running Liquibase: liquibase.exception.DatabaseException: org.postgresql.util.PSQLException: The database returned ROLLBACK, so the transaction cannot be committed. Transaction failure cause is <<ERROR: relation "public.databasechangeloglock" does not exist
Position: 22>>
For more information, please use the --logLevel flag
This happens because updateSQL expects the databasechangelog table to exist; if this is the first time you are running Liquibase against the database, those tables won't exist yet (they get created the first time you run liquibase update).
I do think this is a valid use case for running updateSQL; you can request the feature here:
https://github.com/liquibase/liquibase/issues
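If you want to confirm whether the tracking tables exist yet, a quick check against PostgreSQL might look like this (the table names are the Liquibase defaults; both columns return NULL until liquibase update has created them):
SELECT to_regclass('public.databasechangelog')     AS changelog,
       to_regclass('public.databasechangeloglock') AS changeloglock;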
relation "public.databasechangeloglock" does not exist
I hit this issue using PostgreSQL in a container.
Then I realized the memory limit given to the PostgreSQL container was insufficient.
After increasing the PostgreSQL memory limit to 512 MiB, the problem was solved.

Error: Internal Query Processor Error: The query processor encountered an unexpected error during execution (HRESULT = 0x80040e19)

Please help me out with the solution.
When I try to execute the UPDATE command below, I get the following error. I am using SQL Server 2012.
UPDATE WorkOrder SET Delivered = GETDATE() WHERE WONumber= 69375
Error: Internal Query Processor Error: The query processor encountered an unexpected error during execution (HRESULT = 0x80040e19)
In my case, with the same error, running DBCC CHECKDB found issues with a few tables.
I had to place the database in SINGLE_USER mode,
run DBCC CHECKDB ('xxxx', REPAIR_REBUILD);
(note: 'xxxx' is the database name),
then place the database back in MULTI_USER mode.
This resolved the issue for my Client.
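For reference, a sketch of those steps in T-SQL ('xxxx' stands for the database name):
ALTER DATABASE [xxxx] SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
DBCC CHECKDB ('xxxx', REPAIR_REBUILD);
ALTER DATABASE [xxxx] SET MULTI_USER;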

Error 8921 running DBCC CHECKDB

DBCC CHECKDB shows the following error message in my SQL Server 2008 R2 database:
Msg 8921, Level 16, State 1, Line 2
Check terminated. A failure was detected while collecting facts. Possibly tempdb out of space or a system table is inconsistent. Check previous errors.
What is the solution?
Given that DBCC CHECKALLOC fails too, with no corruption messages, your database has corrupt metadata and you need to restore from your backups (the first thing DBCC CHECKDB does is run some basic checks on the three critical system tables it needs, and if they're badly broken, it fails with message 8921). You have no other choice here - you can't run repair because you can't get DBCC CHECKDB to run.
It's possible you could narrow down which system table is corrupt using DBCC CHECKTABLE on successive object IDs from sys.objects, and then manually edit around the corruption, but that's very advanced and has a very low chance of success.
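A rough sketch of that narrowing-down approach, very much a last resort: it simply runs DBCC CHECKTABLE object by object, printing each name, so the last name printed before a failure is the suspect.
DECLARE @tbl nvarchar(517), @sql nvarchar(600);
DECLARE obj_cur CURSOR FAST_FORWARD FOR
    SELECT QUOTENAME(SCHEMA_NAME(schema_id)) + N'.' + QUOTENAME(name)
    FROM sys.objects
    WHERE type IN ('S', 'U')   -- system and user tables
    ORDER BY object_id;
OPEN obj_cur;
FETCH NEXT FROM obj_cur INTO @tbl;
WHILE @@FETCH_STATUS = 0
BEGIN
    PRINT @tbl;
    SET @sql = N'DBCC CHECKTABLE (''' + @tbl + N''') WITH NO_INFOMSGS';
    EXEC sys.sp_executesql @sql;
    FETCH NEXT FROM obj_cur INTO @tbl;
END;
CLOSE obj_cur;
DEALLOCATE obj_cur;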
If you don't have any backups, you're going to have to create a new database and then export all the schemas and data into the new database.

The transaction log for the database is full

I have a long running process that holds open a transaction for the full duration.
I have no control over the way this is executed.
Because a transaction is held open for the full duration, when the transaction log fills, SQL Server cannot increase the size of the log file.
So the process fails with the error "The transaction log for database 'xxx' is full".
I have attempted to prevent this by increasing the size of the transaction log file in the database properties, but I get the same error.
Not sure what I should try next. The process runs for several hours so it's not easy to play trial and error.
Any ideas?
If anyone is interested, the process is an organisation import in Microsoft Dynamics CRM 4.0.
There is plenty of disk space, we have the log in simple logging mode and have backed up the log prior to kicking off the process.
-=-=-=-=- UPDATE -=-=-=-=-
Thanks all for the comments so far. The following is what led me to believe that the log would not grow due to the open transaction:
I am getting the following error...
Import Organization (Name=xxx, Id=560d04e7-98ed-e211-9759-0050569d6d39) failed with Exception:
System.Data.SqlClient.SqlException: The transaction log for database 'xxx' is full. To find out why space in the log cannot be reused, see the log_reuse_wait_desc column in sys.databases
So, following that advice, I checked the log_reuse_wait_desc column in sys.databases and it held the value "ACTIVE_TRANSACTION".
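(For reference, the check itself is just a query against sys.databases, with 'xxx' being the placeholder database name used above:)
SELECT name, log_reuse_wait_desc FROM sys.databases WHERE name = 'xxx';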
According to Microsoft:
http://msdn.microsoft.com/en-us/library/ms345414(v=sql.105).aspx
That means the following:
A transaction is active (all recovery models).
• A long-running transaction might exist at the start of the log backup. In this case, freeing the space might require another log backup. For more information, see "Long-Running Active Transactions," later in this topic.
• A transaction is deferred (SQL Server 2005 Enterprise Edition and later versions only). A deferred transaction is effectively an active transaction whose rollback is blocked because of some unavailable resource. For information about the causes of deferred transactions and how to move them out of the deferred state, see Deferred Transactions.
Have I misunderstood something?
-=-=-=- UPDATE 2 -=-=-=-
Just kicked off the process with initial log file size set to 30GB. This will take a couple of hours to complete.
-=-=-=- Final UPDATE -=-=-=-
The issue was actually caused by the log file consuming all available disk space. In the last attempt I freed up 120GB and it still used all of it and ultimately failed.
I didn't realise this was happening previously because when the process was running overnight, it was rolling back on failure. This time I was able to check the log file size before the rollback.
Thanks all for your input.
To fix this problem, change the Recovery Model to Simple and then shrink the log file:
1.
Database Properties > Options > Recovery Model > Simple
2.
Database Tasks > Shrink > Files > Log
Done.
Then check your db log file size at
Database Properties > Files > Database Files > Path
To check full sql server log: open Log File Viewer at
SSMS > Database > Management > SQL Server Logs > Current
I had this error once and it ended up being the server's hard drive that had run out of disk space.
Do you have Enable Autogrowth and Unrestricted File Growth both enabled for the log file? You can edit these via SSMS in "Database Properties > Files"
Is this a one time script, or regularly occurring job?
In the past, for special projects that temporarily require lots of log space, I have created a second log file and made it huge. Once the project was complete, we removed the extra log file.
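A sketch of that approach (the logical name, path and size are illustrative; a log file can only be removed once it is empty):
ALTER DATABASE YourDB ADD LOG FILE (NAME = YourDB_templog, FILENAME = 'E:\SQLDATA\YourDB_templog.ldf', SIZE = 50GB);
-- ... run the long process ...
ALTER DATABASE YourDB REMOVE FILE YourDB_templog;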
This is an old school approach, but if you're performing an iterative update or insert operation in SQL, something that runs for a long time, it's a good idea to periodically (programmatically) call "checkpoint". Calling "checkpoint" causes SQL to write to disk all of those memory-only changes (dirty pages, they're called) and items stored in the transaction log. This has the effect of cleaning out your transaction log periodically, thus preventing problems like the one described.
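As a sketch of that idea, assuming the SIMPLE recovery model and illustrative table names (dbo.Archive / dbo.Source):
DECLARE @rows int = 1;
WHILE @rows > 0
BEGIN
    INSERT INTO dbo.Archive (Id, Payload)
    SELECT TOP (50000) s.Id, s.Payload
    FROM dbo.Source AS s
    WHERE NOT EXISTS (SELECT 1 FROM dbo.Archive AS a WHERE a.Id = s.Id);
    SET @rows = @@ROWCOUNT;
    CHECKPOINT;  -- flush dirty pages so inactive log space can be reused
END;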
Try this:
USE YourDB;
GO
-- Truncate the log by changing the database recovery model to SIMPLE.
ALTER DATABASE YourDB
SET RECOVERY SIMPLE;
GO
-- Shrink the truncated log file to 50 MB.
DBCC SHRINKFILE (YourDB_log, 50);
GO
-- Reset the database recovery model.
ALTER DATABASE YourDB
SET RECOVERY FULL;
GO
I hope it helps.
The following will truncate the log.
USE [yourdbname]
GO
-- TRUNCATE TRANSACTION LOG --
DBCC SHRINKFILE(yourdbname_log, 1)
BACKUP LOG yourdbname WITH TRUNCATE_ONLY -- note: TRUNCATE_ONLY was removed in SQL Server 2008; on 2008 and later, switch the database to SIMPLE recovery instead
DBCC SHRINKFILE(yourdbname_log, 1)
GO
-- CHECK DATABASE HEALTH --
ALTER FUNCTION [dbo].[checker]() RETURNS int AS BEGIN RETURN 0 END
GO
If your database recovery model is FULL and you don't have a log backup maintenance plan, you will get this error because the transaction log becomes full due to LOG_BACKUP.
This prevents any action on the database (e.g. shrink), and the SQL Server Database Engine raises error 9002.
To overcome this, check this post: The transaction log for database 'SharePoint_Config' is full due to LOG_BACKUP, which shows detailed steps to solve the issue.
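If log_reuse_wait_desc really is LOG_BACKUP, the immediate fix is usually just to take a log backup (the path here is only an example):
BACKUP LOG YourDB TO DISK = 'E:\SQLBackup\YourDB_log.trn';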
I met the error: "The transaction log for database '...' is full due to 'ACTIVE_TRANSACTION' while deleting old rows from tables of my database for freeing disk space. I realized that this error would occur if the number of rows to be deleted was bigger than 1000000 in my case. So instead of using 1 DELETE statement, i divided the delete task by using DELETE TOP (1000000).... statement.
For example:
instead of using this statement:
DELETE FROM Vt30 WHERE Rt < DATEADD(YEAR, -1, GETDATE())
using following statement repeatedly:
DELETE TOP(1000000) FROM Vt30 WHERE Rt < DATEADD(YEAR, -1, GETDATE())
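A simple way to repeat that until nothing is left, as a sketch:
WHILE 1 = 1
BEGIN
    DELETE TOP (1000000) FROM Vt30 WHERE Rt < DATEADD(YEAR, -1, GETDATE());
    IF @@ROWCOUNT = 0 BREAK;
END;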
Adding to the answers above, I also want to mention that, if possible, you can free up space on the server to fix this issue. If the drive is already full because of database growth, you can delete some unnecessary files from the server your DB is hosted on. At least this temporarily fixes the issue and lets you query the DB.
My problem was solved with multiple executions of limited deletes, like this:
Before
DELETE FROM TableName WHERE Condition
After
DELETE TOP(1000) FROM TableName WHERE Condition
In my case, the answer was not about deleting rows from a table; it was the tempdb space being taken up by an active transaction. This happens mostly when a MERGE (upsert) is being run, where we try to insert, update and delete in one statement. The only option is to make sure the DB is set to the simple recovery model and also to increase the file to the maximum space (add another filegroup). Although this has its own advantages and disadvantages, these are the only options.
The other option you have is to split the MERGE (upsert) into two operations: one that does the insert and the other that does the update and delete.
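A sketch of splitting the upsert, with illustrative table names (dbo.Target / dbo.Staging); each statement then runs in its own, smaller transaction:
-- update existing rows
UPDATE t SET t.Value = s.Value
FROM dbo.Target AS t JOIN dbo.Staging AS s ON s.Id = t.Id;
-- insert new rows
INSERT INTO dbo.Target (Id, Value)
SELECT s.Id, s.Value FROM dbo.Staging AS s
WHERE NOT EXISTS (SELECT 1 FROM dbo.Target AS t WHERE t.Id = s.Id);
-- delete rows that no longer exist in the source
DELETE t FROM dbo.Target AS t
WHERE NOT EXISTS (SELECT 1 FROM dbo.Staging AS s WHERE s.Id = t.Id);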
Here's the code that worked for me. I faced this problem and used this to fix it.
USE master;
-- See why log reuse is blocked and whether CDC is enabled on the database
SELECT name, log_reuse_wait, log_reuse_wait_desc, is_cdc_enabled
FROM sys.databases
WHERE name = 'XX_System';
-- Is the database marked as a replication publication?
SELECT DATABASEPROPERTYEX('XX_System', 'IsPublished');
USE XX_System;
-- Mark all pending replicated transactions as already distributed
EXEC sp_repldone null, null, 0, 0, 1;
-- Remove replication objects from the database
EXEC sp_removedbreplication XX_System;
DBCC OPENTRAN;            -- show the oldest active transaction, if any
DBCC SQLPERF(LOGSPACE);   -- log size and percentage used
EXEC sp_replcounters;     -- replication transaction/latency counters
DBCC SQLPERF(LOGSPACE);   -- check again after the cleanup
Solved: as per the error, the free space left on the drive is not sufficient.
To resolve it, either extend the drive space or move the MDF/LDF/log files to a drive with enough space.
Note: check the existing path with the steps below.
Database Properties -> Select the Files option
Try this:
If possible, restart the MSSQLSERVER and SQLSERVERAGENT services.

vsdbcmd data loss may occur, but where?

When using vsdbcmd to deploy my database:
vsdbcmd.exe /a:Deploy /manifest:MyDatabase.deploymanifest
I Get:
SQL01268 .Net SqlClient Data Provider: Msg 50000, Level 16, State 127, Line 6 Rows were detected. The schema update is terminating because data loss might occur.
SQL01268 An error occurred while the batch was being executed.
Which is fine, but it doesn't tell me where the data loss will happen. In order to find out I have to use <DeployToScript>True</DeployToScript>, then load the script up to see:
IF EXISTS (select top 1 1 from [dbo].[MyTable])
RAISERROR ('Rows were detected. The schema update is terminating because data loss might occur.', 16, 127) WITH NOWAIT
Is there a way to get vsdbcmd to display this info when deploying direct to the DB without having to generate the sql first?
Thanks
There is no way to do this, it's a bug (or missing feature). See Tom's comment to my question.
For me, I needed to empty my DB before deploying the SQL.