TempDb growing big on staging server (SQL Server 2005)

I have a concern regarding tempdb growth on one of my staging servers: tempdb has grown to nearly 42 GB.
Per our standards, we have split the data into a primary .mdf and 7 .ndf files, along with an .ldf. Currently all 7 .ndf files are over 4 GB each, and the primary is over 6 GB.
I tried restarting the instance, but it fills up again fast.
DBCC SHRINKFILE on the .ndf files is not helping either; at most it shrinks them by a few MB.
Kindly help on how to resolve this.
Thanks,
Kapil
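
Before shrinking anything, it is worth finding out what is actually filling tempdb: user objects, internal objects (sorts, hashes, spools), or the version store. A minimal diagnostic sketch using the space-usage DMVs that ship with SQL Server 2005; the TOP 10 cutoff is an arbitrary choice:

USE tempdb
GO
-- Break tempdb usage down by consumer type (page counts are 8 KB pages)
SELECT
    SUM(user_object_reserved_page_count)     * 8 / 1024.0 AS user_objects_mb,
    SUM(internal_object_reserved_page_count) * 8 / 1024.0 AS internal_objects_mb,
    SUM(version_store_reserved_page_count)   * 8 / 1024.0 AS version_store_mb,
    SUM(unallocated_extent_page_count)       * 8 / 1024.0 AS free_mb
FROM sys.dm_db_file_space_usage
GO
-- Which sessions have allocated the most tempdb pages
SELECT TOP 10
    session_id,
    user_objects_alloc_page_count,
    internal_objects_alloc_page_count
FROM sys.dm_db_session_space_usage
ORDER BY user_objects_alloc_page_count + internal_objects_alloc_page_count DESC
GO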

Related

Approximate disk space consumption of rows on SQL Server

I'd like to understand what determines the size of a SQL Server 12 database. The .mdf file is 21.5 GB. Using the "Disk Usage by Top Tables" report in SQL Server Management Studio, I can see that 15.4 GB is used by the "Data" of one table. This table has 1,691 rows across 4 columns (int, varchar(512), varchar(512), image). I assume the image column is responsible for most of the consumption, but
Select (sum(datalength(<col1>)) + ... )/1024.0/1024.0 as MB From <Table>
only gives 328.9 MB.
What might be the reason for this huge discrepancy?
Additional information:
For some rows, the image column is updated regularly.
A screenshot of the report (not reproduced here) suggests that, if we can trust it, indexes or unused space are not the cause.
Maybe you are using a lot of indexes per table; these all add up. Or maybe your auto-grow settings are wrong.
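
If you would rather not rely on the SSMS report, sp_spaceused gives the same reserved/data/index/unused breakdown; a quick sketch (the table name is a placeholder):

-- Reserved vs. data vs. index vs. unused space for one table
EXEC sp_spaceused 'dbo.MyTable'
-- Pass @updateusage if the counts look stale
EXEC sp_spaceused 'dbo.MyTable', @updateusage = 'true'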
The reason was a long-running transaction in another, unrelated database (!) on the same SQL Server instance. The read committed snapshot isolation level filled the version store. Disconnecting the other application reduced the space usage to a sensible amount.
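
If you suspect the same cause, you can hunt for the long-running transaction pinning the version store; a hedged sketch using what is available from SQL Server 2005 onward (the database name is a placeholder):

-- Oldest active transaction in the suspect database
DBCC OPENTRAN ('OtherDatabase')
GO
-- Active snapshot transactions, oldest first; long-lived ones pin the version store
SELECT session_id, transaction_id, elapsed_time_seconds
FROM sys.dm_tran_active_snapshot_database_transactions
ORDER BY elapsed_time_seconds DESC
GO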

How to create multiple files for a SQL Server database to manage disk space

I have around 500 GB of disk space on the D: drive, where I created my database file. After a few years of transactions, the disk is almost full (around 25 MB remaining).
I have around 300 GB of disk space remaining on the E: drive. Can I use the space on the E: drive for the existing database, letting me grow it up to 800 GB (500 GB on D: and 300 GB on E:)?
Any help would be really appreciated.
Thanks.
The first choice would be to purchase a bigger drive and move the entire file to the new drive.
Create a new filegroup and add files on the E: drive (not recommended, as it splits your table data across multiple drives). Creating filegroups is a very simple process; you can do it in SSMS (not sure if it was the same in 2005): just right-click the database, go to Properties, and open the Filegroups page. Or you can use T-SQL ALTER DATABASE to add files (a sketch follows below). Full syntax at http://technet.microsoft.com/en-us/library/ms174269(v=sql.90).aspx
Create a new filegroup and add files on the E: drive, then identify large tables and move them to the new filegroup. This keeps each table entirely on one drive, preserving good read/write performance. The question "How do I move a table to a particular FileGroup in SQL Server 2008" already has the syntax for moving tables to another filegroup.
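
A minimal sketch of adding a filegroup and file via T-SQL; the database, filegroup, and file names, the path, and the sizes below are placeholders to adjust, not values from the question:

-- Add a new filegroup, then place a data file for it on the E: drive
ALTER DATABASE MyDatabase ADD FILEGROUP SecondaryFG
GO
ALTER DATABASE MyDatabase
ADD FILE (
    NAME = MyDatabase_Data2,                       -- logical file name
    FILENAME = 'E:\SQLData\MyDatabase_Data2.ndf',  -- physical path on E:
    SIZE = 10GB,
    FILEGROWTH = 1GB
) TO FILEGROUP SecondaryFG
GO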
Note: SQL Server 2005 has reached end of life.
We would like to remind all customers that Mainstream Support for SQL Server 2005 Service Pack 3 and SQL Server 2005 Service Pack 4 will end on April 12, 2011, and Service Pack Support for SQL Server 2008 Service Pack 1 will end on October 11, 2011.
http://blogs.msdn.com/b/sqlreleaseservices/archive/2011/01/27/end-of-mainstream-support-for-sql-server-2005-and-end-of-service-pack-support-for-sql-server-2008-sp1.aspx
Several other tips:
Check the size of your tempdb and transaction logs; you might be able to reclaim some space there.
Archive data you don't need.
Keep data and log files on separate drives.
Create multiple data files for the database to reduce allocation contention; more about that here: http://support.microsoft.com/kb/2154845

Moving data from one table to another in SQL Server 2005

I am moving around 10 million rows from one table to another in SQL Server 2005. The purpose of the transfer is to take the old data offline.
After some time it throws an error: "The log file for database 'tempdb' is full."
My tempdb and templog are placed on a drive (other than C:) that has around 200 GB free. Also, my tempdb size is set to 25 GB.
As per my understanding, I will have to increase the size of tempdb from 25 GB to 50 GB and set the log file autogrowth to "unrestricted file growth (MB)".
Please let me know of other factors to consider. I cannot experiment much as I am working on a production database, so please tell me whether these changes will have any other impact.
Thanks in advance.
You know the solution; it seems you are just moving part of the data to make your queries faster. I agree with it:
"As per my understanding I will have to increase the size of tempdb from 25 GB to 50 GB and set the log file Auto growth portion to 'unrestricted file growth (MB)'."
Go ahead.
My guess is that you're trying to move all of the data in a single batch. Can you break it up into smaller batches and commit fewer rows per insert? Also, as noted in the comments, you may be able to switch your destination database to the SIMPLE or BULK_LOGGED recovery model.
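
A sketch of that batching idea, assuming SQL Server 2005's DELETE ... OUTPUT INTO so that exactly the deleted rows land in the destination; the table names, columns, date cutoff, and batch size are placeholders:

-- Move rows in small batches so each implicit transaction stays short
-- and tempdb/log usage stays bounded
DECLARE @batch INT, @moved INT
SET @batch = 10000
SET @moved = 1

WHILE @moved > 0
BEGIN
    DELETE TOP (@batch)
    FROM dbo.SourceTable
    OUTPUT deleted.Col1, deleted.Col2
    INTO dbo.ArchiveTable (Col1, Col2)
    WHERE CreatedDate < '20100101'

    SET @moved = @@ROWCOUNT    -- zero rows deleted means we are done
END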
Why are you using the log at all? Take a copy of your data and log files, then set the recovery model to SIMPLE and run the transfer again.

Why would a nightly full backup of our SQL Server database grow 30 GB overnight and then shrink again the next day?

We run SQL Server 2005 and have a database that's about 100 GB (the MDF is 100 GB and the LDF is 34 GB).
Our maintenance plan takes a full database backup every night.
The backup size is usually around 95-100 GB, but it suddenly grew to 120 GB, then 124 GB, then 130 GB, and then back to 100 GB over 4 consecutive days.
Does anyone know what could cause this? I don't believe we added and then removed that much data in such a short period of time.
If your backup is larger than the MDF, that means a lot of log activity was recorded too. SQL Server notes data changes that happen while a full backup is running and does a "mini" log backup to capture them.
I'd say you need to change your index maintenance and backup timings.
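
To check whether the bigger backups line up with index-rebuild nights, msdb keeps backup history; a minimal sketch (the database name is a placeholder):

-- Full backup sizes for the last two weeks, newest first
SELECT database_name,
       backup_finish_date,
       backup_size / 1024 / 1024 AS backup_size_mb
FROM msdb.dbo.backupset
WHERE database_name = 'MyDatabase'
  AND type = 'D'    -- D = full database backup
  AND backup_finish_date >= DATEADD(DAY, -14, GETDATE())
ORDER BY backup_finish_date DESC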

Slow MS SQL 2000, lots of timeouts. How can I improve performance?

I found this script on SQL Authority:
USE MyDatabase
GO
EXEC sp_MSforeachtable @command1 = "print '?' DBCC DBREINDEX ('?', ' ', 80)"
GO
EXEC sp_updatestats
GO
It has reduced my insert failure rate from 100% down to 50%.
It is far more effective than the reindexing Maintenance Plan.
Should I also be reindexing master and tempdb?
How often? (I write to this database 24 hrs a day)
Any cautions that I should be aware of?
RAID 5 on your NAS? That's your killer.
An INSERT is logged: it writes to the .LDF (log) file. This file is 100% write (well, close enough).
This huge write-to-read ratio generates a lot of extra writes per disk in RAID 5.
I have an article in the works (will add it later): RAID 5 writes 4 times as much per disk as RAID 10 in 100% write situations.
Solutions
At a minimum, you need to split the data and log files for your database onto separate drives.
Edit: Clarified this line:
The log files need to go to RAID 1 or RAID 10 drives; it's not so important for the data (.MDF) files. Log files are 100% write, so they benefit from RAID 1 or RAID 10.
There are other potential issues too, such as a fragmented file system or many VLFs (virtual log files, depending on how your database has grown), but I'd say your main issue is RAID 5.
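
To check the VLF count mentioned above, the (undocumented but long-standing) DBCC LOGINFO returns one row per virtual log file, on SQL Server 2000 as well; the database name is a placeholder:

USE MyDatabase
GO
-- One row per VLF; hundreds or thousands of rows suggest a log file
-- that auto-grew in tiny increments and is worth rebuilding
DBCC LOGINFO
GO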
For a 3 TB DB, I'd also stuff in as much RAM as possible (32 GB if Windows Advanced/Enterprise) and enable PAE/AWE etc. This will mitigate some disk issues, but only for data caching.
A fill factor of 85 or 90 is the usual rule of thumb. If your inserts are wide and not strictly monotonic (e.g. an int IDENTITY column), then you'll get lots of page splits with anything higher.
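
For example, to rebuild one table's indexes at fill factor 90 on SQL Server 2000 (the table name is a placeholder):

-- Rebuild all indexes on the table, leaving 10% free space per page
DBCC DBREINDEX ('dbo.MyTable', '', 90)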
I'm not the only one who does not like RAID 5: BAARF
Edit again:
Look for "Write-Ahead Logging (WAL) Protocol" in this SQL Server 2000 article. It's still relevant: it explains why the log file is important.
I can't find my article on how RAID 5 suffers compared to RAID 10 under 100% write loads.
Finally, SQL Server does I/O in 64k chunks: so format NTFS with 64k clusters.
This could be anything at all. Is the system CPU-bound? I/O-bound? Is the disk too fragmented? Is the system paging too much? Is the network overloaded?
Have you tuned the indexes? I don't recall whether there was an index tuning wizard in 2000, but at the least you could run the Profiler to capture a workload that the SQL Server 2005 index tuning wizard could use.
Check your query plans too. Some indexes might not be getting used, or the SQL could be wholly inefficient.
What table maintenance do you have?
Is all the data in the tables relevant to today's processing?
Can you warehouse off some of the data?
What is your locking like? Are you locking whole tables? (A quick way to check is sketched at the end of this answer.)
EDIT:
The SQL Profiler shows all interactions with the SQL Server. It should be a DBA's lifeblood.
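
On SQL Server 2000, the quickest way to see locking and blocking is sp_who2 plus sp_lock; a minimal sketch:

-- Who is connected and who is blocked (check the BlkBy column)
EXEC sp_who2
-- Which locks each spid holds; match spids back to the sp_who2 output
EXEC sp_lock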
Thanks for all of the help. I'm not there yet, but getting better.
I can't do much about hardware constraints.
All available RAM is allocated to SQL.
Fill factor is set at 95.
Using Profiler, an hour's trace fed into index tuning suggested a 27% efficiency increase.
As a result, I doubled the number of successful INSERTs. Now only 1 out of 4 is failing.
Tracing again now, and I will tune afterwards to see if it improves further.
I don't understand locking yet.
For those who maintain SQL Server as a profession, am I on the right track?