Database full backup size - SQL

I have a job that takes a full backup of my database daily. Between 200 and 300 new records are added each day, yet the backup file is always the same size: 7,472 KB. Why is the size fixed while the number of records keeps increasing? The database file specifications are as follows:
Logical Name   File Type   Filegroup        Initial Size (MB)   Autogrowth / Maxsize
DBF            Rows Data   PRIMARY          4                   By 15 MB, unlimited
DBF_Data       Rows Data   FG_Data          221                 By 15 MB, unlimited
DBF_Index      Rows Data   FG_Index         3                   By 15 MB, unlimited
DBF_Log        Log         Not Applicable   793                 By 10 percent, limited to 2097152 MB
This is the code I wrote to take the daily backup:
DECLARE @Date nvarchar(10);
SET @Date = CAST(CAST(GETDATE() AS date) AS nvarchar(10));

DECLARE @Path nvarchar(100);
SET @Path = 'C:\PAT Daily BK\PAT' + @Date + '.bak';

BACKUP DATABASE PAT TO DISK = @Path;

Until your database grows past its initial size, every backup will be roughly the initial size (compression aside). Once the database starts expanding, if the growth increment is larger than the space the new records consume, the backup can stay the same size for many days, then jump when the files have to grow again.
SQL Server database files occupy all of the space allocated to them whether the data needs it or not. Once that space is used up, the files grow according to the autogrowth rules you set.
Update: the above explains the observed behavior for encrypted databases, but not for unencrypted ones: a backup of an unencrypted database contains only the used pages, so its size tracks the actual data rather than the allocated file size.
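To see whether the backup size really is flat over time, you can query the backup history that SQL Server records in msdb. A sketch, assuming the database name PAT from the question:

```sql
-- List the ten most recent full backups of PAT with their sizes in MB.
SELECT TOP (10)
    bs.database_name,
    bs.backup_start_date,
    bs.backup_size / 1048576.0 AS backup_size_mb
FROM msdb.dbo.backupset AS bs
WHERE bs.database_name = 'PAT'
  AND bs.type = 'D'            -- 'D' = full database backup
ORDER BY bs.backup_start_date DESC;
```

If every row shows the same backup_size while row counts climb, the new rows are still fitting inside pages that were already allocated.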

Related

FILESTREAM increases database size even though it stores files on the file system

We know that FILESTREAM is a mechanism for storing files larger than 1 MB in the file system, in order to improve streaming performance.
Let's assume we have a file that is 1.5 GB, and the following Documents table with a FILESTREAM attribute:
CREATE DATABASE TryFileStream;
GO
ALTER DATABASE TryFileStream
ADD FILEGROUP [Common_Filestream] CONTAINS FILESTREAM;
GO
ALTER DATABASE TryFileStream
ADD FILE
(
    NAME = [Common_FileStream_1],
    FILENAME = 'C:\TryFileStream\FileStream1',
    MAXSIZE = UNLIMITED
)
TO FILEGROUP [Common_Filestream];
GO
USE TryFileStream;
GO
CREATE TABLE Documents
(
    [ID] UNIQUEIDENTIFIER ROWGUIDCOL UNIQUE NOT NULL,
    content VARBINARY(MAX) FILESTREAM
);
Of course, when we save the 1.5 GB file into the Documents table, the FILESTREAM path C:\TryFileStream\FileStream1 grows by 1.5 GB.
What surprised me is that saving the file also increases the reported database size by 1.5 GB.
So what is the benefit of using FILESTREAM if the database size keeps growing and backup operations become impractical? Is quick streaming to the user (because files are stored on the file system) the only benefit, or are there others?
What should we do, given that a small database size is a concern and our system contains a lot of big files, many larger than 2 GB?
When you use SSMS to get the database size, it fires a query like this (size is counted in 8 KB pages, so multiplying by 8 gives KB):
SELECT SUM(CAST(gs.size AS float)) * CONVERT(float, 8)
FROM sys.database_files AS gs;
If you fire a similar query without the aggregation:
SELECT DF.type
, DF.type_desc
, DF.name
, DF.size
FROM sys.database_files AS DF
You should see a row with type = 2; this is your FILESTREAM file.
In short, the FILESTREAM file is included in the database size, just like the LOG and DATA files. This makes sense: it is why you had to ADD FILEGROUP and ADD FILE in the first place.
See here for more information about sys.database_files.
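If backup size is the real concern, one option is a filegroup backup that covers only the row data, leaving the FILESTREAM filegroup to be backed up separately and less often. A sketch, assuming the filegroup names from the script above:

```sql
-- Back up only the PRIMARY filegroup, skipping Common_Filestream.
BACKUP DATABASE TryFileStream
    FILEGROUP = 'PRIMARY'
TO DISK = 'C:\Backups\TryFileStream_rows.bak';
```

Note that a restore strategy built on filegroup backups needs more care than plain full backups, so test the restore path before relying on this.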

tempdb growing big on staging server (SQL Server 2005)

I have a concern regarding tempdb growth on one of my staging servers. tempdb has grown to nearly 42 GB.
Per our standards, the data is split into a primary .mdf and seven .ndf files, plus an .ldf. Currently each of the seven .ndf files is over 4 GB, and the primary is over 6 GB.
I tried restarting the instance, but tempdb fills up again quickly.
DBCC SHRINKFILE on the .ndf files does not help either; it shrinks them by at most a few MB.
Kindly help on how to resolve this.
Thanks,
Kapil
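A useful first diagnostic is to see what kind of allocations are consuming tempdb: user objects (temp tables and table variables), internal objects (sorts, hashes, spools), or the version store. A sketch using a DMV that exists in SQL Server 2005:

```sql
-- Break down tempdb space by allocation type (pages are 8 KB).
SELECT
    SUM(user_object_reserved_page_count)     * 8 / 1024 AS user_objects_mb,
    SUM(internal_object_reserved_page_count) * 8 / 1024 AS internal_objects_mb,
    SUM(version_store_reserved_page_count)   * 8 / 1024 AS version_store_mb,
    SUM(unallocated_extent_page_count)       * 8 / 1024 AS free_space_mb
FROM tempdb.sys.dm_db_file_space_usage;
```

Whichever category dominates points at the fix: internal objects suggest huge sorts or hash joins in some workload; a large version store suggests a long-running transaction under snapshot isolation.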

Moving data from one table to another in Sql Server 2005

I am moving around 10 million rows from one table to another in SQL Server 2005. The purpose of the transfer is to take the old data offline.
After some time it throws this error: "The log file for database 'tempdb' is full."
My tempdb data and log files are on a drive (other than C:) with around 200 GB free, and tempdb's size is set to 25 GB.
My understanding is that I will have to increase tempdb from 25 GB to 50 GB and set the log file autogrowth to "unrestricted file growth (MB)".
Please let me know what other factors matter. I cannot experiment much because this is a production database, so please tell me whether these changes will have any other impact.
Thanks in advance.
You already know the solution; it seems you are just moving part of the data to make your queries faster. I agree with your plan: increase tempdb from 25 GB to 50 GB and set the log file autogrowth to "unrestricted file growth (MB)". Go ahead.
My guess is that you're trying to move all of the data in a single batch. Can you break it up into smaller batches, committing fewer rows per insert? Also, as noted in the comments, you may be able to set your destination database to the SIMPLE or BULK_LOGGED recovery model.
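The batching idea can be sketched like this (table and column names are illustrative, not from the question). Each iteration is its own small transaction, so neither tempdb nor the log has to hold all 10 million rows at once:

```sql
-- Move rows in chunks of 10,000; DELETE ... OUTPUT INTO works on SQL Server 2005.
WHILE 1 = 1
BEGIN
    DELETE TOP (10000)
    FROM dbo.OrdersLive
    OUTPUT deleted.* INTO dbo.OrdersArchive   -- insert the deleted rows atomically
    WHERE OrderDate < '20080101';

    IF @@ROWCOUNT = 0
        BREAK;   -- nothing left to move
END;
```

Because the delete and the archive insert happen in one statement, a failure mid-run leaves no half-moved rows; you can simply rerun the loop.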
Why keep full logging for this at all? Take a copy of your data and log files first, then switch the database's recovery model to SIMPLE and run the transfer again.

Why would a nightly full backup of our SQL Server database grow 30 GB over night and then shrink again the next day?

We run SQL Server 2005 and have a database that's about 100 GB (the MDF is 100GB and the LDF is 34 GB).
Our maintenance plan takes a full database back up every night. It's set up to
The backup is usually around 95-100 GB, but it suddenly grew to 120 GB, then 124 GB, then 130 GB, and then back to 100 GB over four consecutive days.
Does anyone know what could cause this? I don't believe we added and then removed that much data in such a short period.
If your backup is larger than the MDF, a lot of log activity is being recorded too. SQL Server notes data changes that happen while a full backup is running and captures them with a "mini" log backup appended at the end.
I'd say you need to change your index maintenance and backup timings: a large rebuild running during (or just before) the backup window inflates the backup by roughly the amount of data it touches.
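One way to check whether nightly index maintenance is the culprit is to see how fragmented the indexes actually are, and rebuild only those that need it instead of everything every night. A sketch using a DMV available in SQL Server 2005:

```sql
-- Indexes over 30% fragmentation in the current database.
SELECT OBJECT_NAME(ips.object_id)        AS table_name,
       ips.index_id,
       ips.avg_fragmentation_in_percent
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
WHERE ips.avg_fragmentation_in_percent > 30
ORDER BY ips.avg_fragmentation_in_percent DESC;
```

If only a handful of indexes cross the threshold, a selective rebuild will log far less than a blanket rebuild, and the backup size spikes should shrink accordingly.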

Excessive reserved space on SQL Server

We have a database running under MSDE (SQL Server 2000, Service Pack 4) that is reserving massive amounts of excess space. Summing sp_spaceused over every table gives a total reserved size of 2,102,560 KB, a data size of 364,456 KB and an unused size of 1,690,760 KB, i.e. nearly four times the used space is reserved. The worst culprits are tables that are frequently written to but never deleted from (transaction logging). In general, deletes are very infrequent and very small in both size and number of records.
The database files on disk are at MSDE's 2 GB limit, and this is causing problems with backups etc.
I have tried DBCC SHRINKDATABASE, DBCC SHRINKFILE and DBCC DBREINDEX with no effect on the file size used on disk.
Two questions: how can I shrink the database file size, and how can I stop SQL Server from reserving the excess space?
Thanks
Paul
USE <DBNAME>
GO
BACKUP LOG <DBNAME> WITH TRUNCATE_ONLY
GO
DBCC SHRINKDATABASE (<DBNAME>)
GO
DBCC SHRINKFILE (<LogicalLogFileName>, 5)
GO
DBCC SHRINKFILE (<LogicalDataFileName>, 5)
GO
If you don't know the logical file names, run EXEC sp_helpfile.
What you could do is take a full database backup, reindex the database, shrink it incrementally, then reindex it again. That way you'll have the database at its current size.
You should also move your logging tables into their own database.
Create separate files for the worst culprits and place them in separate filegroups. Moving a table to another file compacts it by itself, and also makes SHRINKFILE more effective. If needed, you can create more than one file per filegroup.
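Moving a table to a new filegroup can be done by rebuilding its clustered index there; the rebuild rewrites the table's pages compactly on the new file. A sketch (database, file, table, and index names are illustrative):

```sql
-- Add a filegroup and a file for the logging tables.
ALTER DATABASE MyDb ADD FILEGROUP FG_Logging;
ALTER DATABASE MyDb ADD FILE
    (NAME = Logging1, FILENAME = 'D:\Data\MyDb_Logging1.ndf')
TO FILEGROUP FG_Logging;
GO

-- Rebuild the clustered index on the new filegroup, moving the table with it.
CREATE CLUSTERED INDEX IX_TransactionLog_ID
    ON dbo.TransactionLog (ID)
    WITH DROP_EXISTING
    ON FG_Logging;
```

The rebuild needs free space roughly the size of the table while it runs, which matters under MSDE's 2 GB file limit, so move one table at a time.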
Thanks for all the suggestions. In the end, I had to create a new empty database, copy the data over from the bloated database, and then rename the databases.
I will keep an eye on the reserved sizes. Hopefully something was wrong with this particular database's setup; none of our other customers running identical software on MSDE have this problem.