How to shrink a very big log file in SQL Server 2016

I have a warehouse database with an 800 GB log file, and I want to shrink it or find another way to reduce the disk space occupied by Log.ldf.
I tried shrinking the file in several ways: I took a full backup and a transaction log backup, changed the recovery model, and ran the DBCC shrink command, but none of them reduced the log file's size on disk. I also detached the database and deleted the log file, but because of the memory-optimized file container I got an error when I tried to attach it again (I read that SQL Server will automatically add a new log file, but apparently not when the database has a memory-optimized filegroup).
After all of these attempts the log file is still 800 GB, and I don't know what else to do to free the disk space it uses.
Any suggestions? Or did I forget a step in one of my approaches?
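For reference, the usual sequence for reclaiming log space is to take a log backup (so inactive log records can be truncated) and then shrink the physical file; if the shrink does nothing, check what is keeping the log active. A rough sketch, with placeholder database and logical file names:

```sql
-- Find the logical name of the log file (DBCC SHRINKFILE needs it)
SELECT name, size * 8 / 1024 AS size_mb
FROM sys.database_files
WHERE type_desc = 'LOG';

-- In FULL recovery, back up the log first so inactive portions can be truncated
BACKUP LOG MyWarehouse TO DISK = N'D:\Backup\MyWarehouse_log.trn';

-- Then shrink the physical file to a target size (in MB)
USE MyWarehouse;
DBCC SHRINKFILE (N'MyWarehouse_log', 1024);

-- If the file does not shrink, see what is preventing log truncation
SELECT name, log_reuse_wait_desc
FROM sys.databases
WHERE name = N'MyWarehouse';
```

If `log_reuse_wait_desc` shows something other than `NOTHING` (e.g. `LOG_BACKUP`, `ACTIVE_TRANSACTION`, or `XTP_CHECKPOINT` on a database with memory-optimized tables), that is what must be resolved before the shrink can reclaim space.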

Related

Why Database backup size differs when backing up from Query and SSMS?

I am confused by the difference in size between backups taken from SSMS and from a query.
If I create a backup from SSMS in its default folder, something like "C:\Program Files\Microsoft SQL Server\MSSQL14.NAMEDINSTANCE\MSSQL\Backup", the output file, say Db1.bak, is about 198292 KB.
For the same database, if I back up with the query BACKUP DATABASE Db1 TO DISK = 'D:\Db1.bak', the file size is just 6256 KB.
Sometimes another database, say Db2, gives the same file size, i.e. 6256 KB (Db1 and Db2 have identical schemas; only the data differs), while backing it up with SSMS gives 33608 KB, which seems reasonable.
I also verified every database's backup in SSMS with RESTORE VERIFYONLY FROM DISK = 'D:\BACKUP\Db1.bak'
GO, and every check reported the backup as valid.
I also tried deleting Db1 from SSMS, restoring the smaller file, and checking some data in a few tables (not all); everything seems to be there, but the file size still bothers me.
Thank You.
I suspect that, as initially mentioned, you have compression on by
default, and using the GUI with its default settings is not making use of
that (and that if you selected Compress in the GUI, you'd get a
similar size).
If the server option backup compression default is on, compression is applied even if you don't specify it in your BACKUP command, so in both cases the backup would be compressed. But it's easy to check; just run this command for both backups:
restore headeronly
from disk = 'here_the_full_path_with_filename';
The 5th column shows a flag telling you whether the backup is compressed.
But the cause of this difference is something else, and you'll see it when you run RESTORE HEADERONLY: you made multiple backups to the same file.
SSMS issued the BACKUP command with NOINIT and the same file name, so the file now contains more than one backup, and RESTORE HEADERONLY will show them all.
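To stop the file from accumulating appended backup sets, the backup can overwrite the existing media set explicitly. A minimal sketch, with the file name as a placeholder; WITH INIT is what prevents the appending:

```sql
-- List every backup set stored in the file; multiple rows mean
-- backups have been appended (the default NOINIT behavior)
RESTORE HEADERONLY FROM DISK = N'D:\Db1.bak';

-- Overwrite the existing backup sets instead of appending a new one
BACKUP DATABASE Db1 TO DISK = N'D:\Db1.bak' WITH INIT;
```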

Why will my SQL Transaction log file not auto-grow?

The Issue
I've been running a particularly large query, generating millions of records to be inserted into a table. Each time I run the query I get an error reporting that the transaction log file is full.
I've managed to get a test query to run with a reduced set of results by using SELECT INTO instead of INSERT INTO a pre-built table. This reduced set of results generated a 20 GB table of 838,978,560 rows.
When trying to INSERT into the pre-built table, I've also tried it with and without a clustered index. Both failed.
Server Settings
The server is running SQL Server 2005 (full edition, not Express).
The database being used is set to SIMPLE recovery, and there is space available (around 100 GB) on the drive the file sits on.
The transaction log file is set to grow in 250 MB increments up to a maximum of 2,097,152 MB.
The log file appears to grow as expected until it reaches 4729 MB.
When the issue first appeared, the file stopped growing at a lower value; however, I've since reduced the size of other log files on the same server, and this appears to allow this transaction log file to grow further, by the same amount as the reduction in the other files.
I've now run out of ideas on how to solve this. Any suggestions or insight into what to do would be much appreciated.
First, you want to avoid auto-growth whenever possible; auto-growth events are HUGE performance killers. If you have 100 GB available, why not change the log file size to something like 20 GB (just temporarily, while you troubleshoot this)? My policy has always been to use 90%+ of the disk space allocated for a specific MDF/NDF/LDF file; there's no reason not to.
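Pre-sizing the log as suggested can be done with ALTER DATABASE; the database and logical file names, and the sizes, are placeholders here:

```sql
-- Grow the log to a fixed size up front so the large INSERT never
-- waits on repeated 250 MB auto-growth events
ALTER DATABASE MyDb
MODIFY FILE (NAME = N'MyDb_log', SIZE = 20480MB, FILEGROWTH = 1024MB);
```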
If you are using SIMPLE recovery, SQL Server is supposed to manage the task of reclaiming unused space, but sometimes it does not do a great job. Before running your query, check the available free log space. You can do this by:
right-clicking the DB > Tasks > Shrink > Files
changing the type to "Log"
This will show how much unused space you have. You can set "Reorganize pages before releasing unused space > Shrink File" to 0. Going forward, you can also trigger the release of unused space with CHECKPOINT; this may be something to include as a first step before your query runs.
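The same free-space check can be done from a query instead of the GUI; DBCC SQLPERF(LOGSPACE) reports the size and percentage used of every database's log:

```sql
-- Reports log size (MB) and "Log Space Used (%)" for each database
DBCC SQLPERF(LOGSPACE);

-- In SIMPLE recovery, a checkpoint lets the inactive part of the log be reused
CHECKPOINT;
```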

Backup a database on a HDD with a different sector size

In our development environment we have long been using a particular backup and restore script for each of our products through various SQL Server versions and different environment configurations with no issues.
Recently we have upgraded to SQL Server 2012 as our standard development server with SQL Compatibility Level 2005 (90) to maintain support with legacy systems. Now we find that on one particular dev's machine we get the following error when attempting to backup the database:
Cannot use the backup file 'D:\MyDB.bak' because it was
originally formatted with sector size 512 and is now on a device with
sector size 4096. BACKUP DATABASE is terminating abnormally.
With the command being:
BACKUP DATABASE MyDB TO DISK = N'D:\MyDB.bak' WITH INIT , NOUNLOAD , NAME = N'MyDB backup', NOSKIP , STATS = 10, NOFORMAT
The curious thing is that neither the hardware nor the partitions on that dev's machine have changed; even though their sector size is different, this has not previously been an issue.
From my research (i.e. googling) there is not a lot on this issue apart from the advice to use the WITH BLOCKSIZE option, but that then gives me the same error message.
With my query being:
BACKUP DATABASE MyDB TO DISK = N'D:\MyDB.bak' WITH INIT , NOUNLOAD , NAME = N'MyDB backup', NOSKIP , STATS = 10, NOFORMAT, BLOCKSIZE = 4096
Can anyone shed some light on how I can backup and restore a database to HDDs with different sector sizes?
All you have to do is back it up with a different name.
This issue is caused by different sector sizes used by different drives.
You can fix this issue by changing your original backup command to:
BACKUP DATABASE MyDB TO DISK = N'D:\MyDB.bak' WITH INIT , NOUNLOAD , NAME = N'MyDB backup', STATS = 10, FORMAT
Note that I've changed NOFORMAT to FORMAT and removed NOSKIP.
Found a hint to resolving this issue in the comment section of the following blog post on MSDN:
SQL Server–Storage Spaces/VHDx and 4K Sector Size
And more information regarding 4k sector drives:
http://blogs.msdn.com/b/psssql/archive/2011/01/13/sql-server-new-drives-use-4k-sector-size.aspx
Just remove the existing .bak file and re-run.
I ran into the same issue as the OP. On a dev machine, we had a PowerShell script that backed up databases from remote database servers and stored the backup files locally. The script overwrote the same backup files, over and over, and the script had been working fine for a couple years. Then I cloned the spinning media drive to an SSD in the dev machine. Suddenly, we were getting the same error as the OP:
Backup-SqlDatabase : System.Data.SqlClient.SqlError: Cannot use the backup file '\DevMachine\Back-Up\Demo.bak' because it was
originally formatted with sector size 4096 and is now on a device with
sector size 512.
Sure, I could delete all of the existing .bak files to fix the problem. But what if it happens, again? I wanted a command line solution that consistently worked.
Here's our original code:
Backup-SqlDatabase -ServerInstance "DBServer1" -Database "Demo" -BackupFile "\\DevMachine\Back-Up\Demo.bak" -BackupAction Database -CopyOnly -CompressionOption On -ConnectionTimeout 0 -Initialize -Checksum -ErrorAction Stop
After some fiddling around, I changed it to the following to fix the problem:
Backup-SqlDatabase -ServerInstance "DBServer1" -Database "Demo" -BackupFile "\\DevMachine\Back-Up\Demo.bak" -BackupAction Database -CopyOnly -CompressionOption On -ConnectionTimeout 0 -Initialize -Checksum -FormatMedia -SkipTapeHeader -ErrorAction Stop
Basically, the following options were added to fix the issue:
-FormatMedia -SkipTapeHeader
Note that if you read the documentation for the Backup-SqlDatabase cmdlet, -FormatMedia is listed as only applying to tapes and not to disk backups. However, it appears to do the job of blowing away the existing backup file when backing up to disk.
- https://learn.microsoft.com/en-us/powershell/module/sqlps/backup-sqldatabase
I found that if I used the -FormatMedia option by itself, it generated the following error:
Backup-SqlDatabase : The FormatMedia and SkipTapeHeader properties
have conflicting settings.
I fixed the second error by adding an additional option: -SkipTapeHeader. Clearly that's also intended for tape backups, but it worked.
We had the same problem going from 2005 to 2008. The problem was that we were trying to use the same backup file in 2008 that we had used in 2005 (appending backups together into one file).
We changed the script to back up to a different file and the problem was resolved. I would imagine that moving/deleting the old file would have the same effect.
I had the same problem, but just with restore. I got this error in Management Studio: "Specified cast is not valid. (SqlManagerUI)" ...and this error in a query: "SQL Server cannot process this media family."
Then I did a simple thing: I copied the backup set into the default backup folder, for example C:\Program Files\Microsoft SQL Server\MSSQL10_50.SQLEXPRESS2008R2\MSSQL\Backup\bckup.bak.
It worked; I restored it from this location. It looks like SQL Server is sector-size sensitive.
Most probably, Michael's answer is the solution you need:
you just have another backup file with the same name and path.
https://stackoverflow.com/a/32662406/7841170
All you have to do is back it up with a different name.
In my case I was trying to overwrite an existing DB backup that had the same file name. I just deleted the existing file and saved the new backup file with the same name.

Is deleted SQL data stored in the log file, or permanently deleted from the database?

After deleting data from a SQL database, is the deleted data stored in the log file, or is it permanently deleted from the database?
Why does the log file grow in size when data is deleted from the database?
After shrinking the database, the file size is reduced.
Edit 1:
In your fifth paragraph you describe why the log file grows, but why is the disk space not freed after the DELETE command completes? The records remain in the log file as they are. Is it possible to delete data without writing it to the log file? I deleted about 2 million rows, and it increased disk usage by 16 GB.
Since, as you describe, the log shrinks in size after a shrink operation, your DB uses the SIMPLE recovery model.
In that case, no copy of the deleted data is left in the log file.
If you're asking how to recover that data: there is no conventional way.
If you're asking about data security and are worried about deleted data remaining somewhere in the DB: yes, it can; see ghosted records. And obviously some data can be recovered by third-party tools from both DB file types, data and log.
The log grows in size because it holds all the data being deleted until the DELETE command finishes; if the command fails, SQL Server must restore all the partially deleted data to honor the atomicity guarantee.
Additional answers:
No, it is not possible to delete data without writing it to the log file.
Since the log file has already grown, it does not shrink automatically. To reduce the size of the DB's files you have to perform a shrink operation, which is strongly discouraged in a production environment.
Also, try deleting in smaller chunks. Instead of deleting all the data at once like this:
DELETE FROM YourTable
delete in small chunks:
WHILE 1 = 1
BEGIN
    DELETE TOP (10000) FROM YourTable
    IF @@ROWCOUNT = 0 BREAK
END
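The one-off shrink mentioned above can be sketched as follows; the logical file name is a placeholder, and TRUNCATEONLY only releases unused space at the end of the file without moving any pages:

```sql
-- Find the logical name of the log file
SELECT name FROM sys.database_files WHERE type_desc = 'LOG';

-- Release unused space at the end of the file; reasonable as a
-- one-off after a mass delete, not as a routine operation
DBCC SHRINKFILE (N'YourDb_log', TRUNCATEONLY);
```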

Empty SQL Server 2008 DB backup file is very big

I'm deploying my DB; I more or less emptied it of data and then created a backup.
The .bak file is over 100 MB.
Why is this?
How do I get it down?
I'm using SQL Server 2008.
When you back up, please note that SQL Server backup files can contain multiple backups; the default is not to overwrite. If you choose the same backup file and do not choose the overwrite option, it simply appends another backup to the same file, so your file just keeps getting larger.
Run this and all will be revealed:
select dpages * 8 AS [size in KBs]
from sysindexes
where indid <= 1
order by 1 desc
You can also:
Do two backups in a row; the second backup will contain minimal log data. The first backup contains logged activity so as to be able to recover; the second one no longer will.
There is also an issue with leaked Service Broker handles if you use Service Broker in your database with improper code, but if that is the case, the query above will reveal it.
To get the size down, you can use WITH COMPRESSION, e.g.:
backup database mydb to disk = 'c:\tempdb.bak' with compression
It will normally bring the file down to about 20% of its original size. As Martin commented above, also run
exec sp_spaceused
to view the distribution of data vs. log. From what you are saying (1.5 MB for the first table, down to 8 KB at the 45th row), that accounts for maybe tens of MB, so the rest could be in the log file.