TempDB Initial Size resetting even after change - sql

One of my tempdb data files is 60GB. I shrank the file down to 2GB, then set the initial size to 2GB. The shrink succeeds, but when I go back into the database properties for tempdb, it shows an initial size of 60000MB again. I've tried setting it to 4GB as well, and it still resets to 60000MB. This is very frustrating, since every time the service restarts that tempdb data file is set back to 60GB, using up a lot of space.
Any ideas?

How did you "shrink" the file? If there are 60GB worth of entries, the database should "auto-size" to allow room for all entries.
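One thing worth trying, as a rough sketch: set the size with ALTER DATABASE instead of the properties dialog, since the size tempdb is recreated with at startup is the one recorded in master. The logical name tempdev below is the default for the first tempdb data file; check sys.master_files for yours.
USE tempdb;
GO
-- Shrink the live file to the target size (in MB).
DBCC SHRINKFILE (N'tempdev', 2048);
GO
-- Record the new size so it is used when tempdb is recreated at startup.
ALTER DATABASE tempdb
MODIFY FILE (NAME = N'tempdev', SIZE = 2048MB);
GO
-- Confirm the size recorded in master (size is reported in 8 KB pages).
SELECT name, size * 8 / 1024 AS size_mb
FROM sys.master_files
WHERE database_id = DB_ID('tempdb');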

For Ignite Persistence, how to control maximum data size on disk?

How can I limit the maximum size on disk when using Ignite Persistence? For example, my data set in a database is 5TB. I want to cache a maximum of 50GB of data in memory, with no more than 500GB on disk. Any reasonable eviction policy like LRU for on-disk data will work for me. The maxSize parameter controls the in-memory size, and I will set it to 50GB. What should I do to limit my disk storage to 500GB? I'm looking for something like maxPersistentSize and not finding it.
Thank you
There is no direct parameter to limit the total disk usage occupied by the data itself. As you mentioned in the question, you can control in-memory region allocation, but when a data region is full, data pages are flushed to and loaded from disk on demand; this process is called page replacement.
On the other hand, page eviction works only for a non-persistent cluster, preventing it from running out of memory. Personally, I can't see how or why that kind of eviction would be implemented for data stored on disk. I'm almost sure that other "normal" DBs like Postgres or MySQL do not have this option either.
I suppose you might check the following options:
You can limit the WAL and WAL archive max sizes. Though these are rather utility items, they can still occupy a lot of space [1].
Check if it's possible to use expiry policies on your data items; in that case, data will be cleared from disk as well [2].
Use monitoring metrics and configure alerting so you know when you are close to the disk limits.

Can I remove #img_bkp_cache generated by Hyper Backup in Synology?

I have a Synology with 2 TB of disk space, and it is backed up every day by Hyper Backup (with Smart Recycle).
But there is a file, #img_bkp_cache, that keeps growing and takes up almost 1/5th of the total disk capacity:
368G /volume2/#img_bkp_cache
1.3T /volume2/Samba
Is it safe to remove that cache file? How to do that? What can I do to shrink it otherwise?
Thank you for your help.
Here is Synology support's answer (translated):
The cache image contains the index of your remote backups. This index is compared to the remote index to figure out which elements have changed. If you have several remote backups, then #img_bkp_cache will get bigger and bigger.
The index takes roughly 5% of the total size of a backup.
It is not really safe to remove #img_bkp_cache. If you do so, the remote backup will not be affected, but it will be impossible to manage incremental backups.
In a nutshell, this file is important and cannot be deleted without consequences.
Note: Finally, I switched from RAID 1 to RAID 5 and doubled my storage capacity (I had a fifth volume that was unused), which "solved" the problem.

change minimum transaction log file size

I have a database that was set up with poor default transaction log sizes (by me, unfortunately), and now I want to change those defaults permanently. I'm using the following script, which seems to work, but not permanently.
USE master;
GO
ALTER DATABASE [MyDB]
MODIFY FILE (NAME = MyDB_log, SIZE = 10000MB, MAXSIZE = UNLIMITED, FILEGROWTH = 1000MB);
When I'm working on my laptop with some large transactions, I want to shrink the log file back to this 10GB size when I'm done so I don't run out of disk space, but when I do the shrink, the log file shrinks back to 8MB. Another database that I set up properly shrinks back to the 10GB size I specified when I created it.
How can I force the default to be 10GB?
A lot of other posts are about people trying to shrink their log file down to a small size; this is NOT that. I want to keep my local database log file at this large size on purpose and do not want it to ever be smaller than that.
Also, even though the script above sets MAXSIZE to UNLIMITED, when I look at the log file properties it is still capped at 2GB (which I did not set, and in any case my setting it to UNLIMITED doesn't take effect).
Why not specify the size to shrink it to?
USE {Your Database};
GO
DBCC SHRINKFILE (N'{Your Log File Name}' , 10000);
Obviously replacing values in braces ({}).
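A rough sketch combining the two, assuming the logical log file name MyDB_log from the script above: shrink to a 10GB target rather than letting the file collapse to its minimum, then re-apply MODIFY FILE so 10GB is recorded as the file's size again.
USE [MyDB];
GO
-- Shrink to a 10 GB target (target size is in MB); the file is not shrunk below this.
DBCC SHRINKFILE (N'MyDB_log', 10000);
GO
USE master;
GO
-- Re-apply the intended size and growth settings so they persist.
ALTER DATABASE [MyDB]
MODIFY FILE (NAME = MyDB_log, SIZE = 10000MB, MAXSIZE = UNLIMITED, FILEGROWTH = 1000MB);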

Increasing size of SQL Server MDF file

I have a 400GB MDF file and it is growing by 5GB daily. At the moment, autogrow is set low, at 10MB.
If I grow the file by 10GB, I am guessing this will stall SQL Server and cause lots of timeouts.
What's the recommended approach here?
Autogrow or take the hit and grow the file nightly?
The recommended approach is to NEVER grow it outside maintenance periods. Any sensibly large and busy SQL database lives on dedicated storage anyway, so you can pre-allocate all the available space during a maintenance window.
MDF file growth is instantaneous with instant file initialization. SQL Server will not stall if you grow the file by 10 TB, let alone 10 GB. I would recommend you grow the database to the size you expect it to be in the next 12 months in a single operation. There's no reason to wait for it to expand your file size repeatedly throughout the day or even each night.
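A rough sketch of that approach, with hypothetical database and logical file names: grow the data file to the expected 12-month size in a single operation during a quiet period, and check that instant file initialization is enabled so the growth doesn't have to zero-fill the new space (the DMV column below exists in SQL Server 2016 SP1 and later).
USE master;
GO
-- Grow the data file once to the size expected over the next year.
ALTER DATABASE [MyBigDB]
MODIFY FILE (NAME = MyBigDB_data, SIZE = 600000MB, FILEGROWTH = 10240MB);
GO
-- Verify instant file initialization is enabled for the database engine service.
SELECT servicename, instant_file_initialization_enabled
FROM sys.dm_server_services;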

Is there any logic in just maxing tempdb and never having it change size?

The reason I ask is that we have a dedicated RAID10 array with ~150GB for tempdb (the "t" drive). It is only used for storing tempdb; the t drive isn't used by SQL Server or any other process for anything else.
Our DBA has tempdb set up with a 15GB initial size and 20% autogrow increments. Every time the server starts it is resized to 15GB and then, over the course of the day, grows to ~80GB (on average). Now IT is looking into making the initial size larger, say 30 or 40GB, but given the drive is ONLY used for tempdb, my thinking is: why not "max it" right away?
Is there any negative effect to simply creating 4 data files in the primary group for tempdb, giving them each an initial size of 30GB (120GB total), turning autogrow off, and being done with it?
Are there any limits on SQL Server's ability to span multiple tempdb data files in one query? I.e., will it cause problems if tempdb has, say, 70GB total free but the file used by one process is full (30 of 30GB used)?
I would size them to about 100GB and leave autogrow on; this way you don't have to wait for it to grow every time. I would also add multiple files.
Is there any negative effect to simply create 4 data files in the primary group for tempdb, give them each an initial size of 30GB, turn autogrow off and be done with it?
Sounds like a good plan to me; however, I would leave autogrow on just in case someone decides to do a sort operation on a big table which doesn't have an index on that column.
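A rough sketch of that layout, with hypothetical logical file names and paths on the t drive from the question, following the 4 x 30GB idea but leaving a modest autogrow on as suggested above:
USE master;
GO
-- Resize the existing first data file and add three more of equal size.
ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, SIZE = 30GB, FILEGROWTH = 1GB);
ALTER DATABASE tempdb ADD FILE (NAME = tempdev2, FILENAME = 'T:\tempdb2.ndf', SIZE = 30GB, FILEGROWTH = 1GB);
ALTER DATABASE tempdb ADD FILE (NAME = tempdev3, FILENAME = 'T:\tempdb3.ndf', SIZE = 30GB, FILEGROWTH = 1GB);
ALTER DATABASE tempdb ADD FILE (NAME = tempdev4, FILENAME = 'T:\tempdb4.ndf', SIZE = 30GB, FILEGROWTH = 1GB);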
See also here: http://technet.microsoft.com/en-us/library/cc966534.aspx
It is recommended to have .25 to 1 data files (per filegroup) for each CPU on the host server.
This is especially true for TEMPDB where the recommendation is 1 data file per CPU.
Dual core counts as 2 CPUs; logical procs (hyperthreading) do not.
We have found it very useful to create large TempDB data and log files. Anything that cuts down on server OS activity, such as resizing TempDB, improves server efficiency. We have a 16-processor machine with 113 GB dedicated to TempDB data space. This machine is dedicated to large SSIS ETL processes, which results in mass data operations.
The bulk of our ETL operations spawn up to 4 SQL threads. After initially configuring a TempDB file for each processor (16), we quickly realized via performance monitoring that our configuration was forcing SQL Server/Windows to unnecessarily span the multiple TempDB files. We settled on 5 larger TempDB data files and saw performance improvements. We have since moved on to a 24-processor box and are using 8 TempDB files.
Please note that this is a large data-migration server; I'm sure transaction-oriented systems would still benefit from the recommended 1:1 processor-to-TempDB-file configuration. It should also be noted that a large growth percentage on a TempDB file may force a critical transaction to take the hit of that Windows operation, and thus may not be appropriate for your specific application.