I deleted many records from my table but the database size (Firebird) stayed the same. How do I decrease it?
I am looking for something similar to VACUUM in PostgreSQL.
This is one of the many pain points of Firebird.
The best, and only truly effective, way to do this is to backup and restore your database using gbak.
Firebird will occasionally run a sweep to remove stale record versions from data and index pages and regain the space for other use. In other words, as soon as the sweep has run, you will have the same performance as if the database file were smaller. You can force an immediate sweep, if that is what you are trying to do.
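Forcing an immediate sweep is done with the gfix command-line tool. A minimal sketch, assuming a local database file mydb.fdb and the default SYSDBA credentials (substitute your own):

gfix -sweep -user SYSDBA -password masterkey mydb.fdb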
However, the size of the actual database file will not shrink, no matter what, except if you do a backup and restore. If size is a problem, use the -USE_ALL_SPACE switch for gbak on restore; it prevents space from being reserved for future records, which yields a smaller database file.
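A minimal backup/restore sketch with gbak, again assuming a local mydb.fdb and SYSDBA credentials; the file names are illustrative. The -g switch on backup skips garbage collection (see the FAQ excerpt below for why), and -use_all_space on restore fills pages completely:

gbak -b -g -user SYSDBA -password masterkey mydb.fdb mydb.fbk
gbak -c -use_all_space -user SYSDBA -password masterkey mydb.fbk mydb_restored.fdb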
From the official FAQ:
Many users wonder why they don't get their disk space back when they delete a lot of records from a database.
The reason is that it is an expensive operation; it would require a lot of disk writes and memory, just like defragmenting a hard disk partition. The parts of the database (pages) that were used by such data are marked as empty, and Firebird will reuse them the next time it needs to write new data.
If disk space is critical for you, you can get the space back by doing a backup and then a restore. Since you're doing the backup only to restore right away, it's wise to use the "inhibit garbage collection" or "don't use garbage collection" switch (-G in gbak), which will make the backup go A LOT FASTER. Garbage collection is used to clean up your database, and as it is a maintenance task, it's often done together with backup (as backup has to go through the entire database anyway). However, you're soon going to ditch that database file, and there's no need to clean it up.
Related
I have a SQL Server database that, according to Properties => Files, is using 47 GB of disk space in the primary filegroup.
I am 99% sure my database is not this big.
I ran the query from this post https://www.mssqltips.com/sqlservertip/1177/determining-space-used-for-all-tables-in-a-sql-server-database/
and according to this my reserved space is only 1 GB, so how can my DB possibly be using this much space?
I have shrunk the log file, but as I said above, it is the primary data file that is this size.
As per my comment: when a database file grows, it won't release the space afterwards; most likely it got to 47 GB at some point, perhaps during a large load whose data you have since deleted. Not releasing the space is intentional, and in fact shrinking your database is a dangerous path to go down unless you know what you're doing, as it will easily cause things like fragmentation of your indexes.
You can find out how much space, and how much unused space, there is by using objects like sys.sp_spaceused, which reports the amount of free space in the unallocated space column.
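For example, a minimal check, run in the context of the database in question (with no arguments the procedure returns the database-level summary):

-- Database-level summary: database_size vs. unallocated space
EXEC sys.sp_spaceused;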
As noted above, it is not advisable to shrink the database unless you really must. Fragmentation will very likely occur during a shrink, which could have (significant) impacts on the performance of your database.
A database is made up of reserved space, and a portion of that is available (free) space. As the database grows, the available space gets smaller until an autogrowth is triggered. The autogrowth increments are set as a fixed value or a percentage. It is possible that when the database was initially created, an initial size of 47023 MB was set. You can release that available space back to the OS, but you need to shrink the data file, not the log file. Just be cognizant of how your database is growing and how often those autogrowths are happening, because there can be a performance cost to doing this.
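If you do decide to release the space, a minimal T-SQL sketch; the logical file name MyDb_Data and the 2048 MB target are hypothetical, so check sys.database_files for the real name and pick a target that leaves headroom:

-- List the files and their sizes (size is in 8 KB pages, so /128 gives MB)
SELECT name, size / 128 AS size_mb FROM sys.database_files;

-- Shrink the DATA file (not the log) down to roughly 2 GB
DBCC SHRINKFILE (MyDb_Data, 2048);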
You may find it very useful to run two built-in SSRS reports. To do so (in SQL Server Management Studio), right-click on the database name; about ten selections down is a menu item marked Reports. Click on it and try the 'Disk Usage' report and the 'Disk Usage by Top Tables' report. They are very clear reports, and you may be surprised at how large certain tables are.
I have a database which is around 20 GB in size, but around 17 GB of that is actually free space. This has happened because I moved quite a few audit tables that were large in size out to another database altogether.
I mistakenly tried to perform a database shrink during business hours, but didn't realise how long it would take to complete, so I managed to stop it.
I've now done some research on this and read articles saying that shrinking a database shouldn't be done because it can cause "massive" index fragmentation. I'm no SQL guru, but this does ring alarm bells.
People have suggested using a shrink with truncate only.
Are there any SQL experts out there who can help me with the right thing to do here?
Shrinking a database after you remove a large amount of data in a one-time operation is appropriate. You want to leave plenty of free space, as re-growing can be expensive, and you want to rebuild indexes after shrinking, to remove the fragmentation.
But here the database is only 20 GB, so shrinking it won't really solve any problems. If you wanted to shrink it to 6 GB or so, that would be fine. But I would just leave it at 20 GB.
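For reference, the two options discussed in this thread look roughly like this; a minimal sketch, where the logical file name MyDb_Data, the table dbo.MyTable, and the 6144 MB target are all hypothetical placeholders:

-- Option 1: TRUNCATEONLY releases only the free space at the end of the file;
-- it moves no pages, so it causes no fragmentation, but it may free less space
DBCC SHRINKFILE (MyDb_Data, TRUNCATEONLY);

-- Option 2: a full shrink to ~6 GB, followed by an index rebuild to undo
-- the fragmentation the shrink introduces (repeat for each sizeable table)
DBCC SHRINKFILE (MyDb_Data, 6144);
ALTER INDEX ALL ON dbo.MyTable REBUILD;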
I recently came to know that in SQL Server, if I delete or modify a column, the space stays allocated at the back end, so I need to reindex and shrink the database. I have done this, and my database size was reduced from 2.82 GB to 1.62 GB.
That worked well, but now I am confused, and several questions on this subject come to mind. Please help me with them:
1. Is it necessary to recreate (refresh) indexes at some particular interval?
2. Is it necessary to shrink the database at some particular interval so performance stays up to date?
3. If yes to the above, at what interval should I shrink my database?
I have no idea what should be done about this disk space problem. I have 77,000 records taking up 2.82 GB of data space, which is not acceptable. I have two tables, only one of which has an nvarchar(max) column, so the database should need minimal space. Can anyone help me with this? Thanks in advance.
I am going to simplify things a little for you, so you might want to read up on the things I talk about in my answer.
There are two concepts you must understand: allocated space vs. free space. A database might be 2 GB in size but only be using 1 GB, so it has allocated 2 GB with 1 GB of free space. When you shrink a database, it removes the free space, so free space drops to about zero. Don't assume a smaller file size is faster: as your database grows, it has to allocate space again, and when you shrink the file and it then grows every so often, it cannot allocate space in a contiguous fashion. This creates fragmentation of the files on disk, which slows you down even more.
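You can see allocated vs. free space per file with a query like this (a minimal sketch; both size and FILEPROPERTY report in 8 KB pages, hence the /128 to get MB):

-- Allocated size vs. space actually in use, per database file
SELECT name,
       size / 128 AS allocated_mb,
       (size - FILEPROPERTY(name, 'SpaceUsed')) / 128 AS free_mb
FROM sys.database_files;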
With data files (.mdf) this is not so bad, but with the transaction log, shrinking can lead to virtual log file fragmentation issues, which can slow you down. So, in a nutshell, there is very little reason to shrink your database on a schedule. Go read about virtual log files in SQL Server; there are a lot of articles about them. This is a good article about shrinking log files and why it is bad. Use it as a starting point.
Secondly, indexes get fragmented over time. This will mainly hurt the performance of SELECT queries, but will also affect other queries. Thus you need to perform some index maintenance on the database. See this answer on how to defragment your indexes.
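As a starting point, here is a minimal sketch for spotting fragmented indexes; the 30% threshold and dbo.MyTable are illustrative, and sys.dm_db_index_physical_stats is the standard DMV for this:

-- Find indexes in the current database with more than 30% fragmentation
SELECT OBJECT_NAME(s.object_id) AS table_name,
       i.name AS index_name,
       s.avg_fragmentation_in_percent
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS s
JOIN sys.indexes AS i
  ON i.object_id = s.object_id AND i.index_id = s.index_id
WHERE s.avg_fragmentation_in_percent > 30;

-- Rebuild the indexes of a badly fragmented table
ALTER INDEX ALL ON dbo.MyTable REBUILD;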
Update:
Well, the time at which you rebuild indexes is not clear cut. Index rebuilds lock the index for the duration of the rebuild; essentially the index is offline while it runs. In your case it would be fast: 77,000 rows is nothing for SQL Server. Rebuilding the indexes will still consume server resources, though. If you have Enterprise Edition you can do an online index rebuild, which will NOT lock the indexes but will consume more space.
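For reference, the online variant is just an option on the rebuild (Enterprise Edition only; dbo.MyTable is again a placeholder):

ALTER INDEX ALL ON dbo.MyTable REBUILD WITH (ONLINE = ON);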
So what you need to do is find a maintenance window. For example, if your system is used from 8:00 till 17:00, you can schedule maintenance rebuilds after hours using SQL Server Agent. The script in the link can be automated to run.
Your database is not big. I have seen SQL Server handle tables of 750 GB without strain when the IO is split over several disks. The slowest part of any database server is not the CPU or the RAM but the IO pathway to the disks, though that is a huge topic in itself. Back to your point: you are storing data in NVARCHAR(MAX) fields, which I assume is large text. After you shrink the database you see the size at 1.62 GB, which means each row in your database is about 1.62 GB / 77,000, or roughly 22 KB. This seems reasonable. Export the table to a text file and check the size; you will be surprised, as it will probably be larger than 1.62 GB.
Feel free to ask more detail if required.
I have to shrink and back up a database every week which is about 100+ GB in size.
Normally it takes 2-3 hours to shrink. This is quite frustrating, especially when management wants this database to be deployed quickly.
My questions are:
1. Is there some way to shrink a huge database quickly?
2. Instead of shrinking, if I do a backup with the shrink option enabled, does it do the same thing, i.e. remove unnecessary pages?
Thanks
1: No. In general you do not shrink "real" databases (of any size), especially given that a SQL Server backup will not back up unused pages in the database; a backup of a mostly empty 1000 GB database is VERY small, so it makes no sense to shrink first. How do you think people do real backups of IMPORTANT systems (where you run a differential or log backup every 15 minutes or so)? In general, do not rely on autogrow, and do not use shrink on anything that has a large size.
2: Moot, as per 1. Do not shrink.
Why do you think you need to shrink the database in the first place? By the way, 100 GB is quite small; things only get heavy once you hit 1000 GB and larger.
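In other words, back up directly and skip the shrink entirely; the backup only contains allocated pages anyway. A minimal sketch, where the database name and path are illustrative (WITH COMPRESSION further reduces backup size on editions that support it):

BACKUP DATABASE MyDb
TO DISK = 'D:\Backups\MyDb.bak'
WITH COMPRESSION, CHECKSUM;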
I have a system that needs to snapshot specific tables at certain points in time.
At present, the process that takes the snapshot queries the data in the table and puts the output into a temp table (a staging table, not an in-memory table).
Many of these processes can run at the same time in parallel (100+ per hour), and the tables to be copied can run into GBs worth of data.
I am considering the use of database snapshots, so each process can take its own snapshot and work with it.
What are the pros and cons of this approach?
Is there a better way to approach this?
I would certainly consider using snapshots, but there are a couple of things you need to consider before you decide one way or the other:
Snapshots use sparse files to store data, effectively storing only the changes made to each table since the snapshot's creation. The more rows in your table that change, the smaller the disk saving over a simple copy.
Snapshots are taken of the entire database, rather than just one table. Therefore, if your DB is huge and you need 100+ copies of it per hour, that is going to cost you a lot of disk space and performance. On the other hand, if you're happy with one snapshot per hour shared by all your processes, and the data doesn't change too much, snapshots may be exactly what you need.
Snapshots have a performance overhead on your original DB, as each snapshot needs to be kept up to date with the data that has changed. If you're already tight on performance, this may be an issue.
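For reference, creating a snapshot is a single statement. A minimal sketch, where MyDb is the source database, MyDb_Data is its logical data file name, and the .ss path is illustrative:

CREATE DATABASE MyDb_Snapshot_0800 ON
    (NAME = MyDb_Data, FILENAME = 'D:\Snapshots\MyDb_Snapshot_0800.ss')
AS SNAPSHOT OF MyDb;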
There are certainly some other things to consider, but that's not a bad starting point. Books Online has a fairly detailed article on the use of snapshots, which I would read before deciding: http://msdn.microsoft.com/en-us/library/ms175158%28v=SQL.90%29.aspx There's also a section in there on limitations: http://msdn.microsoft.com/en-us/library/ms189940%28v=SQL.90%29.aspx
Hope this helps.
If you need to take a snapshot of the data for backup/recovery purposes, I suggest you use the backup features of SQL Server (i.e., full and transaction log backups).
If your goal is to capture data from some table for historical purposes, you could implement a set of SQL jobs, each running something like:
SELECT *
INTO archive_table
FROM normal_table;
Make sure you reduce transaction logging (i.e., set the database's recovery model to SIMPLE) right before you do this, as it will drastically speed up the process: SELECT ... INTO is minimally logged under that model.
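A minimal sketch of such a job body; MyDb, archive_table, and normal_table are this thread's placeholders, and note that switching the recovery model breaks the log backup chain, so only do this on a database where that is acceptable:

ALTER DATABASE MyDb SET RECOVERY SIMPLE;

-- Minimally logged under the SIMPLE recovery model
SELECT *
INTO archive_table
FROM normal_table;

ALTER DATABASE MyDb SET RECOVERY FULL;  -- restore the original model afterwards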