ALTER DATABASE ADD FILE fails due to running backup in Azure SQL Managed Instance - azure-sql-managed-instance

We run a monthly script that updates partitions. The script also runs ALTER DATABASE ADD FILE and ALTER DATABASE REMOVE FILE, and it runs for about 20 minutes.
When running the script we get this error:
Backup, file manipulation operations (such as ALTER DATABASE ADD FILE) and encryption changes on a database must be serialized. Reissue the statement after the current backup or file manipulation operation is completed.
The error appears because a backup is running at the same time; backups are taken automatically by Azure SQL Managed Instance.
Since we don't know when the backups run, we need a way to run our script without hitting this error.

Managed Instance takes a log backup every 5 minutes (unless a full backup happens to be running at that moment), and file/encryption modification statements are not allowed while a backup is in progress.
You could implement some retry logic, or explicitly check whether there are any ongoing backup operations:
SELECT r.session_id,
       r.command,
       CONVERT(NUMERIC(6, 2), r.percent_complete) AS [Percent Complete],
       CONVERT(VARCHAR(20), DATEADD(ms, r.estimated_completion_time, GETDATE()), 20) AS [ETA Completion Time],
       CONVERT(NUMERIC(10, 2), r.total_elapsed_time / 1000.0 / 60.0) AS [Elapsed Min],
       CONVERT(NUMERIC(10, 2), r.estimated_completion_time / 1000.0 / 60.0) AS [ETA Min],
       CONVERT(NUMERIC(10, 2), r.estimated_completion_time / 1000.0 / 60.0 / 60.0) AS [ETA Hours],
       Stmt = CONVERT(VARCHAR(1000),
                      (SELECT SUBSTRING(text, r.statement_start_offset / 2,
                                        CASE WHEN r.statement_end_offset = -1 THEN 1000
                                             ELSE (r.statement_end_offset - r.statement_start_offset) / 2
                                        END)
                       FROM sys.dm_exec_sql_text(r.sql_handle)))
FROM sys.dm_exec_requests r
WHERE r.command LIKE '%BACKUP%'
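And a minimal retry sketch for the file operation itself, assuming the serialization failure surfaces as error 3023 (the database, file, and filegroup names here are illustrative):
DECLARE @attempt INT = 1;
WHILE @attempt <= 10
BEGIN
    BEGIN TRY
        ALTER DATABASE [YourDatabase]
            ADD FILE (NAME = N'YourNewFile') TO FILEGROUP [YourFileGroup];
        BREAK;  -- succeeded, stop retrying
    END TRY
    BEGIN CATCH
        IF ERROR_NUMBER() <> 3023
            THROW;                   -- unrelated error: re-raise it
        WAITFOR DELAY '00:00:30';    -- a backup is running: wait and retry
        SET @attempt += 1;
    END CATCH
END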

Related

Bulk insert works in SSMS but not in other applications

Context: SQL Server 2005
I have a simple proc, which does a bulk load from an external file.
ALTER PROC [dbo].[usp_test]
AS
IF OBJECT_ID('tempdb..#promo') IS NOT NULL
BEGIN
    DROP TABLE #promo
END

CREATE TABLE #promo (promo VARCHAR(1000))

BULK INSERT #promo
FROM '\\server\c$\file.txt'
WITH
(
    --FIELDTERMINATOR = '',
    ROWTERMINATOR = '\n'
)

SELECT * FROM #promo
I can run it in SSMS. But when I call it from another application (Reporting Services 2005), it throws this error:
Cannot bulk load because the file "\\server\c$\file.txt" could not be opened. Operating system error code 5 (Access is denied.).
This part is complicated, because it may relate to the account used by Reporting Services, or to some Windows security issue.
But I think I can impersonate the login I used to create the proc, since that login can run it in SSMS. So I tried changing the proc to WITH EXECUTE AS SELF. It compiles fine, but when I run it in SSMS, I get:
Msg 4834, Level 16, State 4, Procedure usp_test, Line 12
You do not have permission to use the bulk load statement.
I am still in the same session, so when I run this it actually executes as 'self', which is the login I am using now. So why do I get this error? What should I do?
I know it's a bit unclear, so I'll just list the facts.
======== update
I just tried using SSIS to load the file into a table the report can use. The package runs fine in BIDS, but when it runs in a SQL Agent job it gets the same "access to the file is denied" error. Then I set up a proxy, let the package run under that account, and the job runs with no problem.
So I am wondering: is it that the account SSRS uses can't access the file? What account does SSRS use? Can SSRS be set up to run under a proxy the way SQL Agent does?
============== update again
Finally got it sorted.
I created an SSIS package, put the package in a job (running under a proxy account that can access the file), and had the proc execute the job. This does work, although it is tricky (the proc needs to judge whether the job has finished). It is too tricky to maintain, so I built it only as a proof of concept; it will not go into production.
http://social.msdn.microsoft.com/Forums/sqlserver/en-US/761b3c62-c636-407d-99b5-5928e42f3cd8/execute-as-problem?forum=transactsql
1) The reason you get "You do not have permission to use the bulk load statement." is because (naturally) you don't have permission to use the bulk load statement.
You must be either a sysadmin or a bulkadmin at the server level to run BULK commands.
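On SQL Server 2005, granting that at the server level would look something like this (login name illustrative):
-- Adds the login to the bulkadmin fixed server role.
EXEC sp_addsrvrolemember @loginame = 'DOMAIN\YourLogin', @rolename = 'bulkadmin'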
2) Yes, "Access is denied" usually means whatever credentials you are using to run the sproc in SSRS does not have permissions to that file. So either:
Make the file available to everyone.
Set a known credential with full access to the file to the datasource running the sproc.
3) What the heck, dude.
Why not just use the text file directly as a data source in SSRS?
If that's not possible, why not perform all your ETL in one sproc run outside SSRS, and then just use a simple "select * from table" statement for SSRS?
Please do not run a BULK INSERT every time someone wants the report. If they need up-to-date reads of the file, use the file as a data source. If they can accept, say, a 10-minute lag in the data, create a batch job or ETL process to pick the file up and put it into a database table every 10 minutes, and just read from that. Write once, read many.

The transaction log for the database is full

I have a long running process that holds open a transaction for the full duration.
I have no control over the way this is executed.
Because a transaction is held open for the full duration, when the transaction log fills, SQL Server cannot increase the size of the log file.
So the process fails with the error "The transaction log for database 'xxx' is full".
I have attempted to prevent this by increasing the size of the transaction log file in the database properties, but I get the same error.
Not sure what I should try next. The process runs for several hours so it's not easy to play trial and error.
Any ideas?
If anyone is interested, the process is an organisation import in Microsoft Dynamics CRM 4.0.
There is plenty of disk space, we have the database in the simple recovery model, and we backed up the log prior to kicking off the process.
-=-=-=-=- UPDATE -=-=-=-=-
Thanks all for the comments so far. The following is what led me to believe that the log would not grow due to the open transaction:
I am getting the following error...
Import Organization (Name=xxx, Id=560d04e7-98ed-e211-9759-0050569d6d39) failed with Exception:
System.Data.SqlClient.SqlException: The transaction log for database 'xxx' is full. To find out why space in the log cannot be reused, see the log_reuse_wait_desc column in sys.databases
So following that advice I went to "log_reuse_wait_desc column in sys.databases" and it held the value "ACTIVE_TRANSACTION".
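For reference, that check is a single query against sys.databases:
SELECT name, log_reuse_wait_desc
FROM sys.databases
WHERE name = 'xxx';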
According to Microsoft:
http://msdn.microsoft.com/en-us/library/ms345414(v=sql.105).aspx
That means the following:
A transaction is active (all recovery models).
• A long-running transaction might exist at the start of the log backup. In this case, freeing the space might require another log backup. For more information, see "Long-Running Active Transactions," later in this topic.
• A transaction is deferred (SQL Server 2005 Enterprise Edition and later versions only). A deferred transaction is effectively an active transaction whose rollback is blocked because of some unavailable resource. For information about the causes of deferred transactions and how to move them out of the deferred state, see Deferred Transactions.
Have I misunderstood something?
-=-=-=- UPDATE 2 -=-=-=-
Just kicked off the process with initial log file size set to 30GB. This will take a couple of hours to complete.
-=-=-=- Final UPDATE -=-=-=-
The issue was actually caused by the log file consuming all available disk space. In the last attempt I freed up 120GB and it still used all of it and ultimately failed.
I didn't realise this was happening previously because when the process was running overnight, it was rolling back on failure. This time I was able to check the log file size before the rollback.
Thanks all for your input.
To fix this problem, change the Recovery Model to Simple, then shrink the log file:
1.
Database Properties > Options > Recovery Model > Simple
2.
Database Tasks > Shrink > Files > Log
Done.
Then check your DB log file size at
Database Properties > Files > Database Files > Path
To check the full SQL Server log, open the Log File Viewer at
SSMS > Database > Management > SQL Server Logs > Current
I had this error once, and it ended up being the server's hard drive running out of disk space.
Do you have Enable Autogrowth and Unrestricted File Growth both enabled for the log file? You can edit these via SSMS in "Database Properties > Files"
Is this a one-time script, or a regularly occurring job?
In the past, for special projects that temporarily required lots of space for the log file, I created a second log file and made it huge. Once the project was complete, I removed the extra log file.
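A sketch of that approach, with illustrative names and paths (note the extra file must no longer hold active log records before it can be removed):
ALTER DATABASE YourDB
    ADD LOG FILE (NAME = N'YourDB_log2',
                  FILENAME = N'E:\Logs\YourDB_log2.ldf',
                  SIZE = 50GB);

-- ... run the special project ...

-- Drop the file once it is empty.
ALTER DATABASE YourDB REMOVE FILE YourDB_log2;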
This is an old-school approach, but if you're performing an iterative update or insert operation in SQL, something that runs for a long time, it's a good idea to periodically (programmatically) issue CHECKPOINT. Issuing CHECKPOINT causes SQL Server to write all of the memory-only changes (dirty pages, they're called) to disk and, in the SIMPLE recovery model, lets the log space be reused. This has the effect of cleaning out your transaction log periodically, thus preventing problems like the one described.
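A sketch of that pattern under the SIMPLE recovery model (table and column names are illustrative):
DECLARE @rows INT = 1;
WHILE @rows > 0
BEGIN
    DELETE TOP (10000) FROM dbo.BigTable
    WHERE CreatedAt < DATEADD(YEAR, -1, GETDATE());

    SET @rows = @@ROWCOUNT;
    CHECKPOINT;  -- flush dirty pages so the log space can be reused
END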
Try this:
USE YourDB;
GO
-- Truncate the log by changing the database recovery model to SIMPLE.
ALTER DATABASE YourDB
SET RECOVERY SIMPLE;
GO
-- Shrink the truncated log file to 50 MB.
DBCC SHRINKFILE (YourDB_log, 50);
GO
-- Reset the database recovery model.
ALTER DATABASE YourDB
SET RECOVERY FULL;
GO
I hope it helps.
The following will truncate the log. Note that BACKUP LOG ... WITH TRUNCATE_ONLY was removed in SQL Server 2008; on newer versions, switch to the SIMPLE recovery model instead, as in the previous answer.
USE [yourdbname]
GO
-- TRUNCATE TRANSACTION LOG (SQL Server 2005 and earlier only) --
DBCC SHRINKFILE(yourdbname_log, 1)
BACKUP LOG yourdbname WITH TRUNCATE_ONLY
DBCC SHRINKFILE(yourdbname_log, 1)
GO
-- SMOKE TEST: a trivial ALTER confirms the database accepts DDL again --
ALTER FUNCTION [dbo].[checker]() RETURNS int AS BEGIN RETURN 0 END
GO
If your database's recovery model is FULL and you don't have a log backup maintenance plan, you will get this error because the transaction log becomes full due to LOG_BACKUP.
This prevents any action on the database (e.g. shrink), and the SQL Server Database Engine will raise a 9002 error.
To overcome this behavior, take a log backup; see "The transaction log for database 'SharePoint_Config' is full due to LOG_BACKUP" for detailed steps to solve the issue.
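A minimal log backup looks like this (the backup path is illustrative):
BACKUP LOG SharePoint_Config
TO DISK = N'E:\Backups\SharePoint_Config_log.trn';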
I met the error: "The transaction log for database '...' is full due to 'ACTIVE_TRANSACTION' while deleting old rows from tables of my database for freeing disk space. I realized that this error would occur if the number of rows to be deleted was bigger than 1000000 in my case. So instead of using 1 DELETE statement, i divided the delete task by using DELETE TOP (1000000).... statement.
For example:
instead of using this statement:
DELETE FROM Vt30 WHERE Rt < DATEADD(YEAR, -1, GETDATE())
using following statement repeatedly:
DELETE TOP(1000000) FROM Vt30 WHERE Rt < DATEADD(YEAR, -1, GETDATE())
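Wrapped in a loop, the repeated execution looks like this (a sketch):
WHILE 1 = 1
BEGIN
    DELETE TOP (1000000) FROM Vt30
    WHERE Rt < DATEADD(YEAR, -1, GETDATE());

    IF @@ROWCOUNT = 0 BREAK;  -- stop once no more rows qualify
END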
Adding to the answers above, I also want to mention that, if possible, you can free up space on the server to fix this issue. If the drive is already full due to database growth, you can delete some unnecessary files from the server your DB is built upon. At least this temporarily fixes the issue and lets you query the DB.
My problem was solved with multiple executions of limited deletes, like:
Before
DELETE FROM TableName WHERE Condition
After
DELETE TOP(1000) FROM TableName WHERE Condition
The answer to the question is not about deleting rows from a table; it is the tempdb space that is being taken up by an active transaction. This happens mostly when a MERGE (upsert) is run, where we try to insert, update, and delete in one statement. The only option is to make sure the DB is set to the simple recovery model and to increase the log file to the maximum space (or add another file). Although this has its own advantages and disadvantages, these are the only options.
The other option you have is to split the MERGE (upsert) into two operations: one that does the insert, and another that does the update and delete.
Here's my hero code. I faced this problem and used this code to fix it. (In my case the root cause was replication: log_reuse_wait_desc pointed at it, and clearing the replication state let the log truncate.)
USE master;

-- Why can't the log space be reused?
SELECT name, log_reuse_wait, log_reuse_wait_desc, is_cdc_enabled
FROM sys.databases
WHERE name = 'XX_System';

-- Is the database marked as published for replication?
SELECT DATABASEPROPERTYEX('XX_System', 'IsPublished');

USE XX_System;

-- Mark all pending replicated transactions as distributed, then remove replication.
EXEC sp_repldone NULL, NULL, 0, 0, 1;
EXEC sp_removedbreplication XX_System;

-- Sanity checks: open transactions, log usage, replication counters.
DBCC OPENTRAN;
DBCC SQLPERF(LOGSPACE);
EXEC sp_replcounters;
DBCC SQLPERF(LOGSPACE);
Solved: per the error, the free space left on the drive was not sufficient.
To resolve it, either extend the drive space or move the MDF/LDF files to a drive with enough space.
Note: check the existing file paths under
Database Properties -> Files
Try this:
If possible restart the services MSSQLSERVER and SQLSERVERAGENT.

How to create copy of production SQL database?

I'm looking for best practices and an efficient way to do this. I need to copy my production database to development once per month, so I'm thinking of automating this process if possible.
The size of the database is around 20 GB including the log file (full recovery model).
Please let me know if I need to provide more details.
Hopefully you're making regular backups of your database anyway.
So you basically just need to take the newest backup and restore it on a different server (and maybe with a different database name).
At my workplace, we are using MS SQL Server and we are doing this as well:
Our main database is backed up every evening at 9 pm (full backup).
Every day at 11 pm, a SQL Server Agent job on another server takes the newest backup from the backup folder and restores it as OurMainDatabase_Yesterday.
Here is an example script for MS SQL Server:
ALTER DATABASE OurMainDatabase_Yesterday SET SINGLE_USER WITH ROLLBACK IMMEDIATE

USE master
EXEC sp_detach_db 'OurMainDatabase_Yesterday', 'true'

-- use today's backup from the main server
DECLARE @BackupPath AS nvarchar(500)
SET @BackupPath = '\\MainServer\backup\OurMainDatabase\OurMainDatabase_backup_'
    + CONVERT(varchar(10), GETDATE(), 112) + '2100.BAK'

RESTORE DATABASE OurMainDatabase_Yesterday
FROM DISK = @BackupPath
WITH MOVE 'OurMainDatabase_Data'
     TO 'F:\Data\OurMainDatabase_Yesterday_Data.mdf',
     MOVE 'OurMainDatabase_Log'
     TO 'G:\Logs\OurMainDatabase_Yesterday_Log.ldf',
     REPLACE

ALTER DATABASE OurMainDatabase_Yesterday SET MULTI_USER
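If you would rather take an ad-hoc backup for the copy without disturbing the regular backup set (for example, without resetting the differential base), a COPY_ONLY backup is one option. A sketch, with illustrative names and path:
BACKUP DATABASE OurMainDatabase
TO DISK = N'\\MainServer\backup\OurMainDatabase_copy.bak'
WITH COPY_ONLY, INIT;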

SQL Log and ACTIVE TRANSACTIONS

I have a web server with SQL Server 2008 running a database in SQL Server 2005 compatibility mode, and a local SQL Server 2005 DB for the testing environment.
This forces me to use scripts for backing up/restoring data for testing, because SQL Server 2008 backups do not restore to a 2005 server.
When I run this SQL query to reduce the size of a table on my production web SQL Server (2008):
DELETE FROM TickersDay
WHERE (DATEDIFF(day, TickersDay.[date], GETDATE()) >= 8)
GO
I get this message:
Msg 9002, Level 17, State 4, Line 3
The transaction log for database 'VTNET' is full. To find out why space in the log
cannot be reused, see the log_reuse_wait_desc column in sys.databases
It also comes up at times when I publish scripts.
When I run this SQL command, I get the following result:
SELECT [name], recovery_model_desc, log_reuse_wait_desc
FROM sys.databases
RESULT:
[name] recovery_model_desc log_reuse_wait_desc
VTNET SIMPLE ACTIVE_TRANSACTION
Here are my questions and issues:
I get it: I have a transaction statement that needs a rollback command
(IF @@TRANCOUNT > 0 ROLLBACK), but I have 100 stored procedures, so before I do that...
IN THE MEANTIME... how can I eradicate this issue? I have tried SHRINKING and I have tried backing up the DB...
As you can see, it is in SIMPLE mode... I have no idea how to back up a LOG ONLY file (I have not found out how to do that)...
You might be able to get around this issue simply by getting SQL Server NOT to process the entire table, by letting it use an index on only the dates that need deleting. Rephrase the query to be index-friendly:
DELETE FROM TickersDay
WHERE TickersDay.[date] <= DATEADD(day, -8, GETDATE())
GO
If you run this frequently enough (at least daily), then it only has to process 1/9th or less of the table via an index on TickersDay([date]), instead of having to scan the entire table, which is what happens when you wrap the column in DATEDIFF.
If that still causes this:
The transaction log for database
'VTNET' is full
You really need to increase the log size, because I suspect it is not set to autogrow or is not big enough for this operation. Either that, or start looking at batching the deletes (again assuming you have an index on [date], so each pass efficiently targets only 100 rows), e.g.
DELETE TOP (100) FROM TickersDay
WHERE TickersDay.[date] <= DATEADD(day, -8, GETDATE())
GO
You can either loop it (WHILE @@ROWCOUNT > 0) or just schedule it more frequently as a trickling background delete.
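A sketch of that loop, capturing @@ROWCOUNT into a variable and pausing between batches (the delay value is illustrative):
DECLARE @deleted INT = 1;
WHILE @deleted > 0
BEGIN
    DELETE TOP (100) FROM TickersDay
    WHERE TickersDay.[date] <= DATEADD(day, -8, GETDATE());

    SET @deleted = @@ROWCOUNT;
    WAITFOR DELAY '00:00:01';  -- pause so the delete trickles and the log can clear
END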

How to run a stored procedure every day in SQL Server Express Edition?

How is it possible to run a stored procedure at a particular time every day in SQL Server Express Edition?
Notes:
This is needed to truncate an audit table
An alternative would be to modify the insert query but this is probably less efficient
SQL Server Express Edition does not have the SQL Server Agent
Related Questions:
How can I schedule a daily backup with SQl Server Express?
Scheduled run of stored procedure on SQL Server
Since SQL Server Express does not come with SQL Server Agent, you can use the Windows Task Scheduler to run SQLCMD with a stored proc or a SQL script.
http://msdn.microsoft.com/en-us/library/ms162773.aspx
I found the following mechanism worked for me.
USE master
GO

IF EXISTS (SELECT *
           FROM sys.objects
           WHERE object_id = OBJECT_ID(N'[dbo].[MyBackgroundTask]')
             AND type IN (N'P', N'PC'))
    DROP PROCEDURE [dbo].[MyBackgroundTask]
GO

CREATE PROCEDURE MyBackgroundTask
AS
BEGIN
    -- SET NOCOUNT ON added to prevent extra result sets from
    -- interfering with SELECT statements.
    SET NOCOUNT ON;

    -- The time of day at which the task runs
    DECLARE @timeToRun nvarchar(50)
    SET @timeToRun = '03:33:33'

    WHILE 1 = 1
    BEGIN
        WAITFOR TIME @timeToRun
        EXECUTE [MyDatabaseName].[dbo].[MyDatabaseStoredProcedure];
    END
END
GO

-- Run the procedure when the master database starts.
EXEC sp_procoption @ProcName = 'MyBackgroundTask',
                   @OptionName = 'startup',
                   @OptionValue = 'on'
GO
Some notes:
It is worth writing an audit entry somewhere so that you can see that the query actually ran.
The server needs rebooting once to ensure that the script runs the first time.
Create a scheduled task that calls "C:\YourDirNameHere\TaskScript.vbs" on startup. The VBScript performs repeated task execution (in this example, a 15-minute loop).
Via the command line (cmd.exe must be run as administrator):
schtasks.exe /create /tn "TaskNameHere" /tr "\"C:\YourDirNameHere\TaskScript.vbs\" " /sc ONSTART
Example TaskScript.vbs: this executes your custom SQL script silently using RunSQLScript.bat
Do While 1
    WScript.Sleep(60000 * 15)
    Set WshShell = CreateObject("WScript.Shell")
    WshShell.Run "cmd /c C:\YourDirNameHere\RunSQLScript.bat C:\YourDirNameHere\Some_TSQL_Script.sql", 0
Loop
RunSQLScript.bat: this uses sqlcmd to connect to the database instance and execute the SQL script
@echo off
sqlcmd -S .\SQLEXPRESS -i %1
If you are using Express Edition, you will need to use the Windows Scheduler or the application connecting to the server in some way.
You would use the scheduler to run sqlcmd. Here are some instructions for getting the sqlcmd working with express edition.
SQL Scheduler from http://www.lazycoding.com/products.aspx
Free and simple
Supports all versions of SQL Server 2000, 2005, and 2008
Supports unlimited SQL Server instances with an unlimited number of jobs.
Lets you easily schedule SQL Server maintenance tasks: backups, index rebuilds, integrity checks, etc.
Runs as Windows Service
Email notifications on job success and failure
Since another similar question was asked, and will likely be closed as a duplicate of this one, and there are many options not mentioned in the answers already present here...
Since you are using SQL Express you can't use SQL Server Agent. However there are many alternatives, all of which you can schedule using AT or Windows Task Scheduler depending on your operating system:
VBScript
C# command line app
batch file with SQLCMD
PowerShell
All of these languages/tools (and many others) have the capacity to connect to SQL Server and execute a stored procedure. You can also try these Agent replacements:
SQLScheduler
Express Agent
Standalone SQL Agent (beta)
The easiest way I have found to tackle this issue is to create a query that executes the stored procedure, then save it as a .sql file (c:\expressmaint.sql below). The script should look similar to this:
USE [database name]
EXEC dbo.YourStoredProcedure
Then create a batch file with something similar to the code below in it.
sqlcmd -S servername\SQLExpress -i c:\expressmaint.sql
Then have the task scheduler execute the batch as often as you like
Another approach to scheduling in SQL Server Express is to use Service Broker conversation timers, which can run a stored procedure periodically and so bootstrap a custom scheduler.
See e.g. Scheduling Jobs in SQL Server Express.
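A minimal sketch of the idea, with all object names illustrative (assumes Service Broker is enabled on the database and dbo.MyDailyMaintenance is your hypothetical work proc):
CREATE QUEUE SchedulerQueue;
CREATE SERVICE SchedulerService ON QUEUE SchedulerQueue ([DEFAULT]);
GO

-- Activation proc: runs when the timer message arrives, does the work, re-arms the timer.
CREATE PROCEDURE dbo.OnSchedulerTimer
AS
BEGIN
    SET NOCOUNT ON;
    DECLARE @handle UNIQUEIDENTIFIER, @msgtype SYSNAME;

    RECEIVE TOP (1) @handle = conversation_handle,
                    @msgtype = message_type_name
    FROM SchedulerQueue;

    IF @msgtype = N'http://schemas.microsoft.com/SQL/ServiceBroker/DialogTimer'
    BEGIN
        EXEC dbo.MyDailyMaintenance;                          -- hypothetical work proc
        BEGIN CONVERSATION TIMER (@handle) TIMEOUT = 86400;   -- re-arm for 24 hours
    END
END
GO

ALTER QUEUE SchedulerQueue
    WITH ACTIVATION (STATUS = ON,
                     PROCEDURE_NAME = dbo.OnSchedulerTimer,
                     MAX_QUEUE_READERS = 1,
                     EXECUTE AS OWNER);
GO

-- Bootstrap: open a dialog to ourselves and start the first timer.
DECLARE @dialog UNIQUEIDENTIFIER;
BEGIN DIALOG CONVERSATION @dialog
    FROM SERVICE SchedulerService
    TO SERVICE 'SchedulerService'
    WITH ENCRYPTION = OFF;
BEGIN CONVERSATION TIMER (@dialog) TIMEOUT = 60;  -- first fire in 60 seconds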
You could use Task Scheduler to fire a simple console app that would execute the Sql statement.
As you have correctly noted, without the agent process, you will need something else external to the server, perhaps a service you write and install or Windows scheduler.
Note that with an Express installation for a local application, it is possible that the machine may not be on at the time you want to truncate the table (say you set it to truncate every night at midnight, but the user never has his machine on).
So your scheduled task is never run and your audit log gets out of control (this is a problem with SQL Server Agent as well, but one would assume that a real server would be running non-stop). A better strategy if this situation fits yours might be to have the application do it on demand when it detects that it has been more than X days since truncation or whatever your operation is.
Another thing to look at is if you are talking about a Web Application, there might be time when the application is loaded, and the operation could be done when that event fires.
As mentioned in the comments, there is sp_procoption, which could allow your SP to run each time the engine starts. The drawbacks of this method are that for long-running instances there might be a long time between calls, and it still has issues if the engine is not running at the times you need the operation to be done.
Our company also uses SQL Server Express, so there is no SQL Server Agent.
Since no answer is marked as accepted and all the solutions are quite complex, I'll share what I did. It may be really bad practice, but it worked great for me.
I picked a table that receives inserts on roughly the schedule I needed and created an ON INSERT trigger that performs the required work, as sketched below.
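A hedged sketch of that trigger approach (the table, helper log table, and cleanup proc names are all illustrative; note that heavy work in a trigger slows down the triggering inserts):
CREATE TRIGGER trg_RunMaintenance
ON dbo.FrequentlyInsertedTable
AFTER INSERT
AS
BEGIN
    SET NOCOUNT ON;

    -- Only run the job if it hasn't run in the last day (tracked in a helper table).
    IF NOT EXISTS (SELECT 1
                   FROM dbo.MaintenanceLog
                   WHERE RanAt > DATEADD(DAY, -1, GETDATE()))
    BEGIN
        INSERT INTO dbo.MaintenanceLog (RanAt) VALUES (GETDATE());
        EXEC dbo.TruncateAuditTable;  -- hypothetical cleanup proc
    END
END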