We have a log table in our production database, which is hosted on Azure. The table has grown to about 4.5 million records, and now we just want to delete all the records from that log table.
I tried running
Delete from log_table
And I also tried
Delete top 100 from log_table
Delete top 20 from log_table
When I run the queries, database usage jumps to 100% and the query just hangs. I believe this is because of the large number of records in the table. Is there a way we can overcome the issue?
To delete all rows in a big table, you can use the TRUNCATE TABLE command.
It removes every row from the table and is much faster than DELETE, because it deallocates whole data pages and is only minimally logged.
Ex:
TRUNCATE TABLE table_name;
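Applied to the table from the question (assuming it really is named log_table and is not referenced by any foreign keys, which would make TRUNCATE fail), that is simply:
TRUNCATE TABLE log_table;
Note that TRUNCATE TABLE requires at least ALTER permission on the table, whereas DELETE only needs DELETE permission.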
In an Azure SQL database, you have several options for controlling the size of your database and its log file. First of all, let's start with some definitions:
Data space used is the amount of space used to store database data. Generally, space used increases (decreases) on inserts (deletes). In some cases, the space used does not change on inserts or deletes depending on the amount and pattern of data involved in the operation and any fragmentation. For example, deleting one row from every data page does not necessarily decrease the space used.
Data space allocated is the amount of formatted file space made available for storing database data. The amount of space allocated grows automatically, but never decreases after deletes. This behavior ensures that future inserts are faster since space does not need to be reformatted.
Data space allocated but unused represents the maximum amount of free space that can be reclaimed by shrinking database data files.
Data max size is the maximum amount of space that can be used for storing database data. The amount of data space allocated cannot grow beyond the data max size.
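To see the first two of these figures for your own database, a query along these lines over sys.database_files should do (a sketch; values are converted from 8 KB pages to MB):
-- Database-level data space used vs. data space allocated, in MB (data files only).
SELECT SUM(CAST(FILEPROPERTY(name, 'SpaceUsed') AS bigint) * 8 / 1024.) AS data_space_used_mb,
       SUM(CAST(size AS bigint) * 8 / 1024.) AS data_space_allocated_mb
FROM sys.database_files
WHERE type_desc = 'ROWS';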
In Azure SQL Database, to shrink files you can use either DBCC SHRINKDATABASE or DBCC SHRINKFILE commands:
DBCC SHRINKDATABASE shrinks all data and log files in a database using a single command. The command shrinks one data file at a time, which can take a long time for larger databases. It also shrinks the log file, which is usually unnecessary because Azure SQL Database shrinks log files automatically as needed.
DBCC SHRINKFILE command supports more advanced scenarios:
It can target individual files as needed, rather than shrinking all files in the database.
Each DBCC SHRINKFILE command can run in parallel with other DBCC SHRINKFILE commands to shrink multiple files at the same time and reduce the total shrink time, at the expense of higher resource usage and a greater chance of blocking user queries that are executing during the shrink.
If the tail of the file does not contain data, it can reduce allocated file size much faster by specifying the TRUNCATEONLY argument. This does not require data movement within the file.
Now going on to some useful SQL queries:
-- Shrink database data space allocated.
DBCC SHRINKDATABASE (N'database_name');
-- Review file properties, including file_id and name values to reference in shrink commands
SELECT file_id,
name,
CAST(FILEPROPERTY(name, 'SpaceUsed') AS bigint) * 8 / 1024. AS space_used_mb,
CAST(size AS bigint) * 8 / 1024. AS space_allocated_mb,
CAST(max_size AS bigint) * 8 / 1024. AS max_file_size_mb
FROM sys.database_files
WHERE type_desc IN ('ROWS','LOG');
-- Shrink database data file named 'data_0' by removing all unused space at the end of the file, if any.
DBCC SHRINKFILE ('data_0', TRUNCATEONLY);
GO
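If removing the unused tail is not enough, DBCC SHRINKFILE also accepts a target size in MB and will move data to reach it (slower and more resource intensive). The file name 'data_0' and the 1024 MB target below are only illustrative; use the name or file_id returned by the query above:
-- Shrink the data file 'data_0' down to roughly 1024 MB, moving data pages if necessary.
DBCC SHRINKFILE ('data_0', 1024);
GO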
When it comes to the log file of a database, you can use the following queries:
-- Shrink the database log file (always file_id 2), by removing all unused space at the end of the file, if any.
DBCC SHRINKFILE (2, TRUNCATEONLY);
... and to have the database (including its log file) shrink its files automatically when they contain a large amount of free space, you may use:
-- Enable auto-shrink for the current database.
ALTER DATABASE CURRENT SET AUTO_SHRINK ON;
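To double-check whether auto-shrink is already enabled, you can query sys.databases (the flag is exposed as is_auto_shrink_on):
-- Review the auto-shrink setting for the current database.
SELECT name, is_auto_shrink_on
FROM sys.databases
WHERE name = DB_NAME();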
For reference: I did not come up with this information myself; it is taken from this official article in Microsoft Docs.
Related
We have a 60 GB production database in my new organization. We run close to 500 reports overnight from this DB. I notice that all the report scripts create tables in TempDB and then populate the final report. TempDB size is 6 GB. There is no dependency set up between these report scripts, which are called from PowerShell.
Is it a good practice to use TempDB extensively in this manner? Or is it better to create all the staging tables in the production database itself and drop them after the report is generated?
Thanks,
Roopesh
Temporary tables always get created in TempDb. However, the size of TempDb is not necessarily driven only by temporary tables. TempDb is used in various ways:
Internal objects (Sort & spool, CTE, index rebuild, hash join etc)
User objects (Temporary table, table variables)
Version store (AFTER/INSTEAD OF triggers, MARS)
So it is clear that TempDb is used by various SQL operations, and its size can grow for other reasons as well. In your case, if TempDb has sufficient space to operate normally and your internal process is using it for temporary tables, that is not an issue. You can consider TempDb as a toilet for SQL Server.
You can check what is causing TempDb to grow with the query below:
SELECT
SUM (user_object_reserved_page_count)*8 as usr_obj_kb,
SUM (internal_object_reserved_page_count)*8 as internal_obj_kb,
SUM (version_store_reserved_page_count)*8 as version_store_kb,
SUM (unallocated_extent_page_count)*8 as freespace_kb,
SUM (mixed_extent_page_count)*8 as mixedextent_kb
FROM sys.dm_db_file_space_usage
If the above query shows:
a higher number of user objects, then there is more usage of temp tables, cursors or table variables;
a higher number of internal objects, then the query plans are making heavy use of TempDb, e.g. for sorting, GROUP BY and hash joins;
a higher number of version store pages, then there are long-running transactions or high transaction throughput.
You can monitor TempDb with the script above and identify the real cause of its growth first. That said, 60 GB is quite a small database, and a 6 GB TempDB for it is fairly acceptable.
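If the numbers point at user or internal objects and you want to see which sessions are responsible, a sketch like the one below over sys.dm_db_session_space_usage can help (run it in the context of TempDb; it assumes you have VIEW SERVER STATE permission):
-- Top 10 sessions by pages currently allocated in TempDb, shown in KB.
SELECT TOP (10)
    session_id,
    (user_objects_alloc_page_count - user_objects_dealloc_page_count) * 8 AS user_obj_kb,
    (internal_objects_alloc_page_count - internal_objects_dealloc_page_count) * 8 AS internal_obj_kb
FROM sys.dm_db_session_space_usage
ORDER BY (user_objects_alloc_page_count - user_objects_dealloc_page_count)
       + (internal_objects_alloc_page_count - internal_objects_dealloc_page_count) DESC;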
Part of the above answer is copied from my other answer on SO.
I have a problem when it comes to deleting a large table (170GB) from my database.
When i "elete this large table by right click > delete I do not get the storage space free again. Of course the table is off the database but the database needed space does not shrink
Can anyone tell me what is wrong?
Tables are stored in table spaces. These are allocated to the database, regardless of whether the space is actually used to store tables (or indexes or anything else).
When you delete the table, you have freed space in the table space. The space is available to the database for your next table (or whatever). You need to either drop or shrink the table space to release the space back to the operating system.
A place to start is with dbcc shrinkfile, documented here.
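For example, assuming the data file's logical name is 'mydatabase_data' (check sys.database_files for the real name) and that a target of around 50 GB leaves enough headroom, the command would look like this:
USE mydatabase;
GO
-- Shrink the data file down to roughly 51200 MB (about 50 GB); the target size is illustrative.
DBCC SHRINKFILE ('mydatabase_data', 51200);
GO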
Short answer:
Run sp_clean_db_free_space before shrinking. My assumption is that you've already tried shrinking the files; if not, that question has been answered elsewhere.
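A minimal sketch, assuming the database is called mydatabase (the procedure overwrites residual "ghost" data left on pages by deletes and drops, and can generate significant I/O on a large database):
-- Clean residual data from pages before shrinking.
EXEC sp_clean_db_free_space @dbname = N'mydatabase';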
Parenthetical statement:
You shouldn't shrink databases if you can avoid it.
Long answer: The behavior you see is the result of Ghost Records. To understand more about this at a system level read this article: Inside the Storage Engine: Ghost cleanup in depth.
I have a test database with 1 table having some 50 million records. The table initially had 50 columns. The table has no indexes. When I execute the "sp_spaceused" procedure I get "24733.88 MB" as the result. Now, to reduce the size of this database, I remove 15 columns (mostly int columns) and run "sp_spaceused" again, but I still get "24733.88 MB" as the result.
Why is the database size not reducing after removing so many columns? Am I missing anything here?
Edit: I have tried database shrinking but it didn't help either
Try running the following command:
DBCC CLEANTABLE ('dbname', 'yourTable', 0)
It will free space from dropped variable-length columns in tables or indexed views. More information here DBCC CLEANTABLE and here Space used does not get changed after dropping a column
Also, as correctly pointed out in the link posted in the first comment to this answer: after you've executed the DBCC CLEANTABLE command, you need to REBUILD the clustered index, in case the table has one, in order to get the space back.
ALTER INDEX IndexName ON YourTable REBUILD
When a variable-length column is dropped from a table, it does not reduce the size of the table. The table size stays the same until the indexes are reorganized or rebuilt.
There is also the DBCC CLEANTABLE command, which can be used to reclaim space previously occupied by variable-length columns. Here is the syntax:
DBCC CLEANTABLE ('MyDatabase', 'MySchema.MyTable', 0) WITH NO_INFOMSGS;
GO
Raj
The database size will not shrink simply because you have deleted objects. The database server usually holds the reclaimed space to be used for subsequent data inserts or new objects.
To reclaim the space freed, you have to shrink the database file. See How do I shrink my SQL Server Database?
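As a rough sketch (the database and logical file names are placeholders; look the file names up in sys.database_files first):
-- Shrink the whole database, leaving about 10 percent free space...
DBCC SHRINKDATABASE (N'mydatabase', 10);
-- ...or shrink a single data file to a target size in MB.
DBCC SHRINKFILE (N'mydatabase_data', 1024);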
I have a very large table in my database and I am starting to get this error
Could not allocate a new page for database 'mydatabase' because of insufficient disk space in filegroup 'PRIMARY'. Create the necessary space by dropping objects in the filegroup, adding additional files to the filegroup, or setting autogrowth on for existing files in the filegroup.
How do you fix this error? I don't understand the suggestions there.
If you're using SQL Express, you may be hitting the maximum database size limit (or, more accurately, the filegroup size limit), which is 4GB for versions up to 2005 and 10GB for SQL Express 2008 onwards. That size limit excludes the log file.
There isn't really much to add - it pretty much tells you what you need to do in the error message.
Each object (Table, SP, Index etc) you create in SQL is created on a filegroup. The default filegroup is PRIMARY. It is common to create multiple filegroups that span over many disks. For instance you could have a filegroup named INDEXES to store all of your Indexes. Or if you have one very large table you could move this on to a different filegroup.
You can allocate space to a filegroup, say 2GB. If Auto Grow is not enabled, once the data in the filegroup reaches 2GB, SQL Server cannot create any more objects. This will also occur if the disk that the filegroup resides on runs out of space.
I'm not really sure what else to add - as I said previously, the error message pretty much tells you what is required.
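For completeness, here is a sketch of the two remedies the error message suggests. The database name, logical file names, path and sizes are placeholders to adapt to your own server:
-- Option 1: turn autogrowth on for an existing data file.
ALTER DATABASE mydatabase
MODIFY FILE (NAME = N'mydatabase_data', FILEGROWTH = 256MB);
-- Option 2: add another data file to the PRIMARY filegroup, on a disk with free space.
ALTER DATABASE mydatabase
ADD FILE (
    NAME = N'mydatabase_data2',
    FILENAME = N'D:\SQLData\mydatabase_data2.ndf',
    SIZE = 1GB,
    FILEGROWTH = 256MB
) TO FILEGROUP [PRIMARY];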
If you are using MSDE, then once the data in the filegroup reaches its 2GB limit, SQL Server cannot create any more objects.
Use the DBCC SHRINKFILE statement to shrink the file:
USE databasename ;
GO
-- Truncate the log by changing the database recovery model to SIMPLE.
ALTER DATABASE databasename
SET RECOVERY SIMPLE;
GO
-- Shrink the truncated log file to 1 MB.
DBCC SHRINKFILE (databasename_Log, 1);
GO
-- Reset the database recovery model.
ALTER DATABASE databasename
SET RECOVERY FULL;
GO
We have a weekly maintenance plan to shrink all user databases and rebuild their indexes. This has been working fine until we created a read-only database; now each time the plan runs it fails when it starts processing this database, due to its read-only state.
As far as I can see we have two options. Remove the read-only flag from the database: this is possible, but as the database is only updated once a quarter it makes sense from a performance point of view to make use of the read-only feature. Or manually select the databases the plan should run for, i.e. all the user databases apart from the read-only one; this then requires people to remember to add any new databases to the plan.
Does anyone have any suggestions of a better way of doing this?
Thanks
Neil
Why are you shrinking the database in the first place?
Also, there's no need to maintain read-only DBs like that.
I'd remove the read-only flag if you don't want to customise the maintenance plan.
Why are you shrinking DBs at all? If the database grows to a given size, then this is probably its natural current size.
Also remember that an index rebuild (as a rule of thumb) requires free space of about 120% of the target table size, e.g. a 500 MB table needs 600 MB free space.
It's pointless to shrink then rebuild... and you'll have horrendous file fragmentation too
I suppose you could modify the maintenance plan to start with an 'Execute T-SQL Statement' task that removes the read-only flag (ALTER DATABASE database-name SET READ_WRITE), and add a final step to set it back:
ALTER DATABASE database-name SET READ_ONLY
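Put together, the extra plan steps might look like this (database-name is a placeholder for your read-only database):
-- First step of the maintenance plan: make the database writable.
ALTER DATABASE [database-name] SET READ_WRITE;
-- ...the existing shrink / index rebuild tasks run here...
-- Final step: put the database back into read-only mode
-- (WITH ROLLBACK IMMEDIATE can be added if open connections block the change).
ALTER DATABASE [database-name] SET READ_ONLY;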