SQL Server Replication Job Error: Message: Command Text: SELECT TOP 1 0 FROM [dbo].[tablename] WITH (TABLOCK, HOLDLOCK) - sql-server-2016

SQL Snapshot replication is failing due to this message:
Message: Execution Timeout Expired. The timeout period elapsed prior to completion of the operation or the server is not responding.
Command Text: SELECT TOP 1 0 FROM [dbo].[tablename] WITH (TABLOCK, HOLDLOCK)
I'm not quite sure where to start troubleshooting this. I've searched Google and SO but didn't find anything that helps me get to the bottom of it. Is this just an issue with a table being locked (or not locked) when it should be?
Questions/Answers:
What part of the process fails, creating a snapshot or the log reader? The error is in creating the snapshot.
Is the error in the replication monitor and is the source the publisher or subscriber? The error is in the replication monitor and it is the publisher.
Desired Result:
No error, snapshot replication not timing out.
Thank you.
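One thing worth checking on the publisher while the snapshot agent runs is whether another session is blocking that TABLOCK/HOLDLOCK request; a minimal sketch (run in the published database; nothing here is specific to the original setup):
-- Shows requests that are currently blocked and who is blocking them
SELECT r.session_id,
       r.blocking_session_id,
       r.wait_type,
       r.wait_time,
       r.wait_resource,
       t.text AS blocked_statement
FROM sys.dm_exec_requests AS r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
WHERE r.blocking_session_id <> 0;
If a long-running session shows up as the blocker, either let it finish or consider raising the QueryTimeout value in the Snapshot Agent profile.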

Related

Azure Synapse Serverless Pool Database Error: Lock request time out period exceeded Error:1222

When I attempt to delete a view in Serverless Pool Database from SSMS Version 18.4 I get the following error:
Lock request time out period exceeded Error:1222
Can someone let me know how to overcome this issue?
Lock request time out period exceeded Error:1222
The error "lock request time out period exceeded" (error 1222) means a query waited on a blocked resource for longer than the lock timeout setting allows. The lock timeout setting controls how long, in milliseconds, a query waits on a blocked resource before returning an error.
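In a regular SQL Server session the timeout can be adjusted with SET LOCK_TIMEOUT; a minimal sketch (10000 ms is only an illustrative value, and whether the serverless pool honours this setting is a separate question):
SET LOCK_TIMEOUT 10000;  -- wait up to 10 seconds for a lock before raising error 1222
SET LOCK_TIMEOUT -1;     -- restore the default of waiting indefinitely
To find the session that is holding the lock, first look for open transactions: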
SELECT * FROM sys.dm_exec_sessions WHERE open_transaction_count >= 1;
The above query uses the sys.dm_exec_sessions view to find sessions that currently have open transactions.
Then kill the offending session with the KILL command (129 here is just an example session_id):
KILL 129;
To avoid this, make sure every BEGIN TRANSACTION is matched by a COMMIT.
The following runs without error but leaves a transaction open, because the single COMMIT only takes @@TRANCOUNT from 2 back down to 1:
BEGIN TRANSACTION   -- @@TRANCOUNT = 1
BEGIN TRANSACTION   -- @@TRANCOUNT = 2
-- your SQL statements here
COMMIT              -- @@TRANCOUNT = 1; the outer transaction is still open
If you close a query window that still has an uncommitted transaction, SSMS will prompt you to commit or roll it back.
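In a standard SQL Server session you can also check for a leftover open transaction before closing the window; a quick sketch:
SELECT @@TRANCOUNT AS open_transactions;  -- anything greater than 0 means a transaction is still open
DBCC OPENTRAN;                            -- reports the oldest active transaction in the current database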

Unable to drop or truncate sql server table

We are using Azure Data Flows and we are trying to load data into one particular table in SQL Server. However, our data flow keeps running for hours even for a small set of data.
When we try to truncate or drop the table, the request times out.
How can we force drop and recreate the table?
What I checked
We don't have any foreign key constraints that would prevent the drop.
I get this error when I try to truncate:
Failed to execute query. Error: A severe error occurred on the current command. The results, if any, should be discarded.
I also ran this query and found this:
Query -
SELECT session_id
,blocking_session_id
,wait_time
,wait_type
,last_wait_type
,wait_resource
,transaction_isolation_level
,lock_timeout
FROM sys.dm_exec_requests
WHERE blocking_session_id <> 0
Is this causing the issue?
How can I fix it?
I found this after running:
exec sp_who 88
What can I do about it?
To find out more about blocking session 88, run exec sp_who 88. It looks like this is the session doing the blocking. If you are allowed to add a procedure to the database, install sp_whoisactive, which gives you much more information.
Then you can run DBCC INPUTBUFFER(88) to find out which procedure or process is executing that SELECT query.
If it is safe to kill that process, you can kill the session with:
KILL 88
Before killing the session, make sure it is still running the same process.
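Putting those steps together, a rough sequence looks like this (88 is the blocking session id from the question; adjust as needed):
-- 1. See what the blocking session is doing
EXEC sp_who 88;
DBCC INPUTBUFFER(88);        -- last statement sent by session 88
-- 2. Optionally get richer detail if sp_whoisactive is installed
-- EXEC sp_WhoIsActive;
-- 3. Only if it is safe to do so, kill the blocking session
-- KILL 88;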

Sybase kill process from Interactive SQL

I am trying to kill a Sybase process, but with no success.
sp_who returns, among others, this row:
fid: 0, spid: 14, status: running, loginame: sa, origname: sa, hostname: server, blk_spid: 0, dbname: DBSOTEST, tempdbname: tempdb, cmd: INSERT, block_xloid: 0, threadpool: syb_default_pool
If I try to kill this process (kill 14) I get the error:
Could not execute statement.
You cannot use KILL to kill your own process.
Sybase error code=6104, Severity Level=16, State=1, Transaction State=1, Line 1
select syb_quit() exits my session, but the process is not stopped.
Observation:
After a restart of the Sybase server the process is still there. Is this normal? I do not have any insert command running, nor any other program doing inserts.
Any insert into any table of the DB does not work.
Any select works.
How can I obtain permission to insert into the tables of my database?
There seem to be two questions combined here: one about killing the process and one about permissions. Please raise separate questions for these.
As for the killing: your own process will always be there from the moment you connect to the ASE server, and, as the error message says, you cannot kill yourself.
When the inserts fail, at least post the error messages, or talk to your DBA.

The transaction log for the database is full

I have a long running process that holds open a transaction for the full duration.
I have no control over the way this is executed.
Because a transaction is held open for the full duration, when the transaction log fills, SQL Server cannot increase the size of the log file.
So the process fails with the error "The transaction log for database 'xxx' is full".
I have attempted to prevent this by increasing the size of the transaction log file in the database properties, but I get the same error.
Not sure what I should try next. The process runs for several hours so it's not easy to play trial and error.
Any ideas?
If anyone is interested, the process is an organisation import in Microsoft Dynamics CRM 4.0.
There is plenty of disk space, we have the log in simple logging mode and have backed up the log prior to kicking off the process.
-=-=-=-=- UPDATE -=-=-=-=-
Thanks all for the comments so far. The following is what led me to believe that the log would not grow due to the open transaction:
I am getting the following error...
Import Organization (Name=xxx, Id=560d04e7-98ed-e211-9759-0050569d6d39) failed with Exception:
System.Data.SqlClient.SqlException: The transaction log for database 'xxx' is full. To find out why space in the log cannot be reused, see the log_reuse_wait_desc column in sys.databases
So following that advice I went to "log_reuse_wait_desc column in sys.databases" and it held the value "ACTIVE_TRANSACTION".
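For reference, that check is just a query against sys.databases; a sketch with the database name anonymised as 'xxx', as in the error above:
SELECT name, log_reuse_wait_desc
FROM sys.databases
WHERE name = 'xxx';   -- returned ACTIVE_TRANSACTION in this case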
According to Microsoft:
http://msdn.microsoft.com/en-us/library/ms345414(v=sql.105).aspx
That means the following:
A transaction is active (all recovery models).
• A long-running transaction might exist at the start of the log backup. In this case, freeing the space might require another log backup. For more information, see "Long-Running Active Transactions," later in this topic.
• A transaction is deferred (SQL Server 2005 Enterprise Edition and later versions only). A deferred transaction is effectively an active transaction whose rollback is blocked because of some unavailable resource. For information about the causes of deferred transactions and how to move them out of the deferred state, see Deferred Transactions.
Have I misunderstood something?
-=-=-=- UPDATE 2 -=-=-=-
Just kicked off the process with initial log file size set to 30GB. This will take a couple of hours to complete.
-=-=-=- Final UPDATE -=-=-=-
The issue was actually caused by the log file consuming all available disk space. In the last attempt I freed up 120GB and it still used all of it and ultimately failed.
I didn't realise this was happening previously because when the process was running overnight, it was rolling back on failure. This time I was able to check the log file size before the rollback.
Thanks all for your input.
To fix this problem, change the Recovery Model to Simple and then shrink the log file:
1. Database Properties > Options > Recovery Model > Simple
2. Database Tasks > Shrink > Files > Log
Done.
Then check your database log file size under Database Properties > Files > Database Files > Path.
To view the full SQL Server log, open the Log File Viewer under SSMS > Database > Management > SQL Server Logs > Current.
I had this error once and it ended up being the server's hard drive that had run out of disk space.
Do you have Enable Autogrowth and Unrestricted File Growth both enabled for the log file? You can edit these via SSMS in "Database Properties > Files"
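The same settings can also be applied in T-SQL with ALTER DATABASE ... MODIFY FILE; a sketch, assuming a database called xxx and a logical log file name of xxx_log (replace with your own):
-- Let the log file grow in 512 MB increments with no upper limit
ALTER DATABASE [xxx]
MODIFY FILE (NAME = N'xxx_log', FILEGROWTH = 512MB, MAXSIZE = UNLIMITED);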
Is this a one-time script, or a regularly occurring job?
In the past, for special projects that temporarily required lots of log space, I created a second log file and made it huge. Once the project was complete, we removed the extra log file.
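If that approach fits, the extra log file can be added and later removed with plain T-SQL; a sketch assuming a database called xxx and a drive with plenty of free space (names and paths are placeholders):
-- Add a large temporary second log file for the duration of the project
ALTER DATABASE [xxx]
ADD LOG FILE (NAME = N'xxx_log2', FILENAME = N'F:\SQLLogs\xxx_log2.ldf', SIZE = 50GB);
-- Once the project is done and the file no longer holds active log records, drop it again
ALTER DATABASE [xxx] REMOVE FILE [xxx_log2];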
This is an old-school approach, but if you're performing an iterative update or insert operation in SQL, something that runs for a long time, it's a good idea to periodically (programmatically) call CHECKPOINT. Calling CHECKPOINT forces SQL Server to write all of those memory-only changes (dirty pages, they're called) to disk, and under the simple recovery model it also allows the inactive part of the transaction log to be reused. This has the effect of cleaning out your transaction log periodically, thus preventing problems like the one described.
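As a rough illustration of that idea, here is a sketch of a batched operation that issues CHECKPOINT between batches (the table, column, and batch size are made up, and it assumes the simple recovery model):
-- Process rows in batches and checkpoint between them so the log can be reused
DECLARE @rows int = 1;
WHILE @rows > 0
BEGIN
    UPDATE TOP (10000) dbo.SomeLargeTable
    SET Processed = 1
    WHERE Processed = 0;
    SET @rows = @@ROWCOUNT;
    CHECKPOINT;  -- flush dirty pages; in SIMPLE recovery this lets log space be reused
END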
Try this:
USE YourDB;
GO
-- Truncate the log by changing the database recovery model to SIMPLE.
ALTER DATABASE YourDB
SET RECOVERY SIMPLE;
GO
-- Shrink the truncated log file to 50 MB.
DBCC SHRINKFILE (YourDB_log, 50);
GO
-- Reset the database recovery model.
ALTER DATABASE YourDB
SET RECOVERY FULL;
GO
I hope it helps.
The following will truncate the log. Note that BACKUP LOG ... WITH TRUNCATE_ONLY was removed in SQL Server 2008; on later versions, switch the database to the SIMPLE recovery model instead (as in the previous answer) before shrinking.
USE [yourdbname]
GO
-- TRUNCATE TRANSACTION LOG --
DBCC SHRINKFILE(yourdbname_log, 1)
BACKUP LOG yourdbname WITH TRUNCATE_ONLY -- SQL Server 2005 and earlier only
DBCC SHRINKFILE(yourdbname_log, 1)
GO
-- CHECK DATABASE HEALTH --
ALTER FUNCTION [dbo].[checker]() RETURNS int AS BEGIN RETURN 0 END
GO
If your database's recovery model is FULL and you don't have a log backup maintenance plan, you will get this error because the transaction log fills up and log reuse is blocked with the reason LOG_BACKUP.
This blocks further activity on the database (you cannot even shrink it), and the SQL Server Database Engine raises error 9002.
To overcome this, take regular transaction log backups; the article "The transaction log for database 'SharePoint_Config' is full due to LOG_BACKUP" shows detailed steps to solve the issue.
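The core fix in that situation is simply to start backing up the log; a minimal sketch (the path and database name are placeholders):
-- Back up the transaction log so the inactive portion can be reused
BACKUP LOG [YourDB]
TO DISK = N'D:\Backups\YourDB_log.trn';
-- Then, if needed, shrink the physical log file
-- DBCC SHRINKFILE (YourDB_log, 1024);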
I met the error "The transaction log for database '...' is full due to 'ACTIVE_TRANSACTION'" while deleting old rows from tables of my database to free disk space. I realized that this error would occur if the number of rows to be deleted was bigger than 1000000, in my case. So instead of using one DELETE statement, I divided the delete task up using DELETE TOP (1000000) statements.
For example:
instead of using this statement:
DELETE FROM Vt30 WHERE Rt < DATEADD(YEAR, -1, GETDATE())
use the following statement repeatedly:
DELETE TOP(1000000) FROM Vt30 WHERE Rt < DATEADD(YEAR, -1, GETDATE())
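If running it by hand repeatedly is tedious, the repetition can be wrapped in a simple loop; a small sketch built around the statement above:
WHILE 1 = 1
BEGIN
    DELETE TOP (1000000) FROM Vt30 WHERE Rt < DATEADD(YEAR, -1, GETDATE());
    IF @@ROWCOUNT = 0 BREAK;  -- stop once no old rows remain
END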
Adding to the answers above, I also want to mention that, if possible, you can free up space on the server to fix this issue. If the drive is already full because of database growth, you can delete unnecessary files from the server where your DB is hosted. At least this temporarily fixes the issue and lets you query the DB again.
My problem was solved by running multiple limited deletes, like this:
Before
DELETE FROM TableName WHERE Condition
After
DELETE TOP(1000) FROM TableName WHERE Condition
In my case the answer was not deleting rows from a table; it was tempdb space being consumed by an active transaction. This happens mostly when a MERGE (upsert) is run that inserts, updates, and deletes in a single statement. The only options are to make sure the database is set to the simple recovery model and to increase the log file to the maximum available space (or add another file). These options have their own advantages and disadvantages.
The other option you have is to split the MERGE (upsert) into two operations: one that does the insert and another that does the update and delete.
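One way to break the upsert apart into smaller, separate statements is sketched below; the table and column names are made up for illustration and are not from the original question:
-- 1. Update rows that already exist in the target
UPDATE t
SET    t.Value = s.Value
FROM   dbo.Target AS t
JOIN   dbo.Source AS s ON s.Id = t.Id;
-- 2. Insert rows that are new
INSERT INTO dbo.Target (Id, Value)
SELECT s.Id, s.Value
FROM   dbo.Source AS s
WHERE  NOT EXISTS (SELECT 1 FROM dbo.Target AS t WHERE t.Id = s.Id);
-- 3. Delete rows that no longer exist in the source
DELETE t
FROM   dbo.Target AS t
WHERE  NOT EXISTS (SELECT 1 FROM dbo.Source AS s WHERE s.Id = t.Id);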
Here's the code that worked for me. I faced this problem and used this to fix it.
USE master;
-- Check why log reuse is blocked and whether CDC is enabled
SELECT
name, log_reuse_wait, log_reuse_wait_desc, is_cdc_enabled
FROM
sys.databases
WHERE
name = 'XX_System';
-- Is the database published for replication?
SELECT DATABASEPROPERTYEX('XX_System', 'IsPublished');
USE XX_System;
-- Mark all pending replicated transactions as distributed, then remove replication
EXEC sp_repldone null, null, 0,0,1;
EXEC sp_removedbreplication XX_System;
-- Check open transactions, replication counters, and log space usage
DBCC OPENTRAN;
DBCC SQLPERF(LOGSPACE);
EXEC sp_replcounters;
DBCC SQLPERF(LOGSPACE);
Solved: as the error indicates, the free space left on the drive was not sufficient.
To resolve it, either extend the drive space or move the MDF/LDF/log files to a drive with enough space.
Note: check the existing path using the steps below.
Database Properties -> Select the Files option
Try this:
If possible, restart the MSSQLSERVER and SQLSERVERAGENT services.

SQL Server error "Could not continue scan with NOLOCK due to data movement."

I am having an issue when running queries or stored procedures. Every time I run a query I get the following error:
Could not continue scan with NOLOCK due to data movement.
If I remove the WITH NOLOCK command, I get a different error:
Msg 824, Level 24, State 2, Line 1
SQL Server detected a logical consistency-based I/O error: incorrect pageid (expected 1:19818941; actual 1:19818957). It occurred during a read of page (1:19818941) in database ID 9 at offset 0x000025cd37a000 in file 'E:\SQLDATA\MSCRM.mdf'. Additional messages in the SQL Server error log or system event log may provide more detail. This is a severe error condition that threatens database integrity and must be corrected immediately. Complete a full database consistency check (DBCC CHECKDB). This error can be caused by many factors; for more information, see SQL Server Books Online.
What should I do to resolve this error?
First, obviously, try DBCC CHECKDB.
If that cannot resolve the issue, you may need to restore from a backup and then manually copy over the most recent changes. Hopefully you have been doing nightly backups... ?
If the error message is prefixed with an object name (procedure, trigger, or function), you can often just drop and recreate the object, or alter it if possible.
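A hedged sketch of that first step, using the database name from the error message above (the repair option is a last resort because it can discard data):
-- Check the extent of the corruption first
DBCC CHECKDB (N'MSCRM') WITH NO_INFOMSGS, ALL_ERRORMSGS;
-- Only if a clean backup is not available and data loss is acceptable:
-- ALTER DATABASE [MSCRM] SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
-- DBCC CHECKDB (N'MSCRM', REPAIR_ALLOW_DATA_LOSS);
-- ALTER DATABASE [MSCRM] SET MULTI_USER;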