What is using all my storage in SQL Server?

I have a SQL Server database that, according to Properties => Files, is using 47GB of disk space in the primary filegroup.
I am 99% sure my database is not this big.
I ran the query from this post https://www.mssqltips.com/sqlservertip/1177/determining-space-used-for-all-tables-in-a-sql-server-database/
and according to this my reserved space is only 1 GB, so how can my DB possibly be using this much space?
I have shrunk the log file, but as I said above, it is the primary data file that is this size.

As per my comment: when a database file grows, it won't release the space on its own; most likely it got to 47GB at some point, perhaps during a large load, and you have since deleted the data. Not releasing the space is intentional, and in fact shrinking your database is a dangerous path to go down unless you know what you're doing, as it will easily cause things like fragmentation of your indexes.
You can find out how much space there is, and how much of it is unused, with objects like sys.sp_spaceused, which reports the free space in the unallocated space column.
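For example, a minimal check run in the context of the database in question might look like this (the database and table names are placeholders):

    -- Database-level summary: 'unallocated space' is space inside the files
    -- that is not reserved by any object.
    USE MyDatabase;
    EXEC sys.sp_spaceused;

    -- Per-table breakdown for a single (placeholder) table.
    EXEC sys.sp_spaceused N'dbo.MyBigTable';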
As noted above, it's not advised that you shrink the database unless you really must. Fragmentation will very likely occur during a shrink, which could have a (significant) impact on the performance of your database.

A database is made up of reserved space, and a portion of that is available (free) space. As the database grows, the available space gets smaller until an autogrowth is triggered. The autogrowth increment is set as a fixed value or a percentage. It is possible that when the database was initially created, an initial size of 47023 MB was set. You can release that available space back to the OS, but you need to shrink the data file, not the log file. Just be cognizant of how your database is growing and how often those auto-growths are happening, because there could be a performance cost to doing that.
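If you do decide to give the space back, a sketch along these lines could be used (the logical file name and target size are placeholders; check sys.database_files for the real name, and remember the fragmentation caveats above):

    -- Find the logical names and current sizes of the database's files.
    SELECT name, type_desc, size * 8 / 1024 AS size_mb
    FROM sys.database_files;

    -- Shrink the primary data file down to roughly 2GB, leaving the log alone.
    DBCC SHRINKFILE (N'MyDatabase_Data', 2048);  -- target size in MB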

You may find it very useful to run two built-in SSRS reports. To do so (in SQL Server Management Studio), right-click on the database name; about ten selections down is a menu item marked Reports. Click on it and try the 'Disk Usage' report and the 'Disk Usage by Top Tables' report. They are very clear reports, and you may be surprised at how large certain tables are.
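If you prefer a query over the report, something like the following (a rough equivalent of 'Disk Usage by Top Tables', not the report's exact query) shows where the reserved space is going:

    -- Reserved space and row counts per table, largest first.
    SELECT t.name AS table_name,
           SUM(ps.reserved_page_count) * 8 / 1024.0 AS reserved_mb,
           SUM(CASE WHEN ps.index_id IN (0, 1) THEN ps.row_count ELSE 0 END) AS [rows]
    FROM sys.dm_db_partition_stats AS ps
    JOIN sys.tables AS t ON t.object_id = ps.object_id
    GROUP BY t.name
    ORDER BY reserved_mb DESC;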

Related

Why are SELECT queries on tables with BLOBs slow, even when the BLOB is not selected?

SELECT queries on tables with BLOBs are slow, even if I don't include the BLOB column. Can someone explain why, and maybe how to circumvent it? I am using SQL Server 2012, but maybe this is more of a conceptual problem that would be common for other distributions as well.
I found this post: SQL Server: select on a table that contains a blob, which shows the same problem, but the marked answer doesn't explain why this is happening, nor does it provide a good suggestion on how to solve the problem.
If you are asking for a way to solve the performance drag, there are a number of approaches that you can take. Adding indexes to your table should help massively provided you aren't simply selecting the entire recordset. Creating views over the table may also assist. It's also worth checking the levels of index fragmentation on the table as this can cause poor performance and could be addressed with a regular maintenance job. The suggestion of creating a linked table to store the blob data is also a genuinely good one.
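As an illustration of the indexing point (table and column names are made up): a nonclustered index does not carry the BLOB column, so a query that can be answered from the index never has to touch the LOB pages.

    -- Hypothetical table: dbo.Documents(Id PK, Title, CreatedAt, Content VARBINARY(MAX)).
    CREATE NONCLUSTERED INDEX IX_Documents_CreatedAt
        ON dbo.Documents (CreatedAt)
        INCLUDE (Title);

    -- Served entirely from the index (Id comes along as the clustered key);
    -- the BLOB column is never read.
    SELECT Id, Title, CreatedAt
    FROM dbo.Documents
    WHERE CreatedAt >= '2024-01-01'
    ORDER BY CreatedAt;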
However, if your question is asking why it's happening, this comes down to the fundamentals of how MS SQL Server works. Essentially your database, and every database on the server, is split into pages: 8KB chunks of data with a 96-byte header, each page representing what is possible in a single I/O operation. Pages are collected and grouped into extents, 64KB groups of eight contiguous pages, so SQL Server uses sixteen extents per megabyte of data. There are a few differing page types; a data page, for example, won't contain what are termed "Large Objects". Separate page types are used for those, which include the data types text, image, varbinary(max), xml and so on, as well as variable-length columns which exceed the 8KB page size (and don't forget the 96-byte header).
At the end of each page there will be a small amount of free space. Database operations naturally shift these pages around all the time, and free space allocations can grow massively in a database dealing with large amounts of I/O and random record access/modification. This is why free space in a database can grow so much. There are tools available within the management suite to reduce or remove that free space, which essentially re-organizes pages and extents.
Now, I may be making a leap here, but I'm guessing that the BLOBs you have in your table exceed 8KB. Bear in mind that if they exceed 64KB they will not only span multiple pages but indeed span multiple extents. The net result is that a "normal" table read will cause a massive number of I/O requests. Even if you're not interested in the BLOB data, the server may have to read through the pages and extents to get the other table data. This is only compounded as more transactions cause the pages and extents that make up a table to become non-contiguous.
Where "Large Objects" are used, SQL Server writes Row-Overflow values which include a 24bit pointer to where the data is actually stored. If you have several columns on your table which exceed the 8kb page size combined with blobs and impacted by random transactions, you will find that the majority of the work your server is doing is I/O operations to move pages in and out of memory, reading pointers, fetching associated row data, etc, etc... All of which represents serious overhead.
I have a suggestion, then: keep all the BLOBs in a separate table with an identity ID, and save only that identity ID in your main table.
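A rough sketch of that layout (all names are invented for the example):

    -- BLOBs live in their own table...
    CREATE TABLE dbo.DocumentBlob (
        BlobId  INT IDENTITY(1,1) PRIMARY KEY,
        Content VARBINARY(MAX) NOT NULL
    );

    -- ...and the main table stores only the key.
    CREATE TABLE dbo.Document (
        DocumentId INT IDENTITY(1,1) PRIMARY KEY,
        Title      NVARCHAR(200) NOT NULL,
        BlobId     INT NULL REFERENCES dbo.DocumentBlob (BlobId)
    );

    -- Everyday queries never touch dbo.DocumentBlob; join to it only when
    -- the BLOB content is actually needed.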
It could be because SQL Server cannot cache the table pages as easily, so you have to go to disk more often. I'm no expert as to why, though.
A lot of people frown on BLOBs/images in databases. In SQL Server 2012 there is a compromise where you can configure the database to keep such objects in a file structure rather than in the actual database (FILESTREAM); you might want to look into that.
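Very roughly, and assuming FILESTREAM has already been enabled at the instance level, the setup looks something like this (the database, path and table names are placeholders):

    -- Add a FILESTREAM filegroup and a file-system container for it.
    ALTER DATABASE MyDb ADD FILEGROUP FileStreamGroup CONTAINS FILESTREAM;
    ALTER DATABASE MyDb
        ADD FILE (NAME = N'MyDb_fs', FILENAME = N'C:\Data\MyDb_fs')
        TO FILEGROUP FileStreamGroup;

    -- FILESTREAM tables need a ROWGUIDCOL column with a unique constraint.
    CREATE TABLE dbo.DocumentFile (
        DocumentId UNIQUEIDENTIFIER ROWGUIDCOL NOT NULL UNIQUE DEFAULT NEWID(),
        Content    VARBINARY(MAX) FILESTREAM NULL
    );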

What about the performance of cursors, reindexing and shrinking?

I recently came to know that when I delete or modify a column, SQL Server still holds on to the space at the back end, so I need to reindex and shrink the database. I have done this and my database size reduced from 2.82 GB to 1.62 GB.
That seems good, but it has left me confused, and several questions about this subject come to mind. Please help me with them:
1. Is it necessary to recreate (refresh) indexes at regular intervals?
2. Is it necessary to shrink the database at regular intervals so performance stays up to date?
3. If so, how often should I refresh (shrink) my database?
I have no idea what should be done about my disk space problem. I have 77,000 records taking 2.82 GB of data space, which is not acceptable. I have two tables, and only one of them has an nvarchar(max) column, so the database should need minimal space. Can anyone help me with this? Thanks in advance.
I am going to simplify things a little for you, so you might want to read up on the things I talk about in my answer.
There are two concepts you must understand: allocated space vs free space. A database might be 2GB in size but only using 1GB, so it has allocated 2GB with 1GB of free space. When you shrink a database it removes that free space, so free space ends up at about 0. Don't think a smaller file size is faster. As your database grows it has to allocate space again, and when you shrink the file and it then grows every so often, it cannot allocate space in a contiguous fashion. This creates fragmentation of the files, which slows you down even more.
With data files (.mdf) this is not so bad, but with the transaction log, shrinking the log can lead to virtual log file fragmentation issues which can slow you down. So, in a nutshell, there is very little reason to shrink your database on a schedule. Go read about virtual log files in SQL Server; there are a lot of articles about it. This is a good article about shrinking log files and why it is bad; use it as a starting point.
Secondly, indexes get fragmented over time. This mainly leads to bad performance of SELECT queries, but it will also affect other queries. Thus you need to perform some index maintenance on the database. See this answer on how to defragment your indexes.
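A minimal sketch of what such maintenance looks like (index and table names are placeholders; the 5%/30% thresholds are the commonly quoted rules of thumb, not hard rules):

    -- Check fragmentation for the current database.
    SELECT OBJECT_NAME(ips.object_id) AS table_name,
           i.name                     AS index_name,
           ips.avg_fragmentation_in_percent
    FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
    JOIN sys.indexes AS i
      ON i.object_id = ips.object_id AND i.index_id = ips.index_id
    WHERE ips.avg_fragmentation_in_percent > 5;

    -- Light fragmentation (~5-30%): reorganize.
    ALTER INDEX IX_MyTable_MyColumn ON dbo.MyTable REORGANIZE;

    -- Heavy fragmentation (>30%): rebuild.
    ALTER INDEX IX_MyTable_MyColumn ON dbo.MyTable REBUILD;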
Update:
Well, when to rebuild indexes is not clear cut. Index rebuilds lock the index during the rebuild; essentially it is offline for the duration. In your case it would be fast: 77,000 rows is nothing for SQL Server. Rebuilding the indexes will still consume server resources, though. If you have Enterprise edition you can do an online index rebuild, which will NOT lock the index but will consume more space.
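For illustration (the table name is a placeholder; ONLINE = ON requires Enterprise edition):

    -- Offline rebuild (any edition): the index is locked for the duration.
    ALTER INDEX ALL ON dbo.MyTable REBUILD;

    -- Online rebuild (Enterprise edition): the index stays available,
    -- at the cost of extra space and resources.
    ALTER INDEX ALL ON dbo.MyTable REBUILD WITH (ONLINE = ON);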
So what you need to do is find a maintenance window. For example, if your system is used from 8:00 till 17:00, you can schedule maintenance rebuilds after hours with SQL Server Agent. The script in the link can be automated to run.
Your database is not big. I have seen SQL Server handle tables of 750GB without strain if the I/O is split over several disks. The slowest part of any database server is not the CPU or the RAM but the I/O pathway to the disks, though that is a huge topic in itself. Back to your point: you are storing data in NVARCHAR(MAX) fields, which I assume is large text. After you shrank the database you saw the size at 1.62GB, which means each row in your database is about 1.62GB / 77,000, or roughly 22KB. This seems reasonable. Export the table to a text file and check the size; you will be surprised, as it will probably be larger than 1.62GB.
Feel free to ask more detail if required.

Speed up the SQL database shrinking process

I have to shrink and back up a database every week which is 100+ GB in size.
Normally it takes 2-3 hours to shrink. This is quite frustrating, especially when management wants this database to be deployed quickly.
My questions are:
1. Is there some way to shrink a huge database quickly?
2. Instead of shrinking, if I do a backup with the shrink option enabled, does it do the same thing, i.e. remove unnecessary pages?
Thanks
1: No. In general you do not shrink "real" databases (of size), especially given that a SQL Server backup will not back up unused pages in the database, so a backup of a nearly empty 1000GB database is VERY small; it makes no sense to shrink first. How do you think people do real backups of IMPORTANT stuff (where you run a delta backup every 15 minutes or so)? Generally, do not rely on autogrow, and do not use shrink on anything that has a large size.
2: moot as per 1. Do not shrink.
Why do you think you need to shrink the database in the first place? By the way, 100GB is quite small; things only start to get tricky once you hit 1000GB and larger.
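To illustrate the point about backups: a plain full backup skips unallocated pages, so there is nothing to "shrink" first (the path and database name are placeholders):

    BACKUP DATABASE MyDb
    TO DISK = N'D:\Backups\MyDb_full.bak'
    WITH COMPRESSION, CHECKSUM, STATS = 10;  -- COMPRESSION is optional but usually worthwhile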

How do I optimize a table after deleting many records?

I deleted many records from my table, but the database size (Firebird) stayed the same. How do I decrease it?
I am looking for something similar to VACUUM in PostgreSQL.
This is one of the many pains of Firebird.
The best, and only truly effective, way to do this is to backup and restore your database using gbak.
Firebird will occasionally run a sweep to remove the deleted records from indexes etc. and regain the space for other use. In other words, as soon as the sweep has run, you will have the same performance as if the database file were smaller. You can force an immediate sweep, if that is what you are trying to do.
However, the size of the actual database file will not shrink, no matter what, unless you do a backup and restore. If size is a problem, use the -USE_ALL_SPACE parameter for gbak on restore; it prevents space being reserved for future records, which will yield a smaller database.
From the official FAQ:
Many users wonder why they don't get their disk space back when they delete a lot of records from the database.
The reason is that it is an expensive operation; it would require a lot of disk writes and memory, just like defragmenting a hard disk partition. The parts of the database (pages) that were used by such data are marked as empty, and Firebird will reuse them the next time it needs to write new data.
If disk space is critical for you, you can get the space back by doing a backup and then a restore. Since you're doing the backup only to restore right away, it's wise to use the "inhibit garbage collection" or "don't use garbage collection" switch (-G in gbak), which will make the backup go A LOT FASTER. Garbage collection is used to clean up your database, and as it is a maintenance task, it's often done together with backup (as backup has to go through the entire database anyway). However, you're soon going to ditch that database file, and there's no need to clean it up.
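Putting the two switches together, a backup/restore cycle along these lines is the usual way to reclaim the space (file names and credentials are placeholders; check your gbak version's documentation for the exact switch spellings):

    # Backup without garbage collection (faster, since the old file is about to be discarded).
    gbak -b -g -user SYSDBA -password masterkey mydb.fdb mydb.fbk

    # Restore without reserving free space on data pages (smaller database file).
    gbak -c -use_all_space -user SYSDBA -password masterkey mydb.fbk mydb_new.fdb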

Does performance of a database (SQL Server 2005) decrease if I shrink it?

Does performance of a database (SQL Server 2005) decrease if I shrink it?
What exactly happens to the .mdf and .ldf files when a shrink is applied (internals)?
Shrinking a database consumes resources in itself. Where it runs into issues is when the database needs to grow again: assuming you have auto-grow set, it will consume more resources to auto-grow. Constant auto-shrink (or shrink as part of a maintenance plan) will also cause physical disk fragmentation.
If you have auto-grow enabled and it is set to the default of 1MB, then constant auto-grows will consume a lot of resources.
It is best practice to size your database appropriately up front: expected initial size plus expected growth over a period (month, year, whatever period you see fit). You should not use auto-shrink or use shrink as part of a maintenance program.
You should also set your auto-grow in MB (not as a % of the database, because when auto-growing it needs to calculate the % first, then grow the database). Set the auto-grow to a reasonable amount to ensure that it isn't going to be growing every 10 minutes; try to aim for one or two growths a day.
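For example, a fixed growth increment can be set like this (the logical file name and increment are placeholders):

    ALTER DATABASE MyDb
    MODIFY FILE (NAME = N'MyDb_Data', FILEGROWTH = 256MB);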
You should also look at enabling instant file initialization for your SQL Server instance.
Good luck,
Matt
It's important to understand that when you shrink a database, the pages are re-arranged. Pages on the end of the data file are moved to open space in the beginning of the file, with no regard to fragmentation.
A clustered index determines the physical order of data in a table. So, imagine that you just created a clustered index, which would have re-ordered the data in that table, physically. Well, then when you execute a shrink command, the data that had just been neatly ordered during the creation of the clustered index will now potentially be out of order, which will affect SQL's ability to make efficient use of it.
So, any time you do a shrink operation you have the potential of impacting performance for all subsequent queries. However, if you rebuild your clustered indexes / primary keys after the shrink, you remove much of the fragmentation that you may have introduced during the shrink operation. If performance is critical but you are also forced to do shrinks regularly, then in an ideal world you'd want to rebuild your indexes after each shrink operation.
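A sketch of that clean-up step (the table name is a placeholder); note that a rebuild needs working space, so it will grow the file again to some degree:

    -- After the shrink, rebuild the clustered (and other) indexes to put the
    -- pages back into logical order.
    ALTER INDEX ALL ON dbo.MyTable REBUILD;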
Yes, it could affect performance a bit. When a database is in operation it doesn't care too much about its disk space usage; it cares more about efficient data retrieval/persistence.