Deleting a column doesn't reduce database size - SQL

I have a test database with one table containing some 50 million records. The table initially had 50 columns and has no indexes. When I execute the "sp_spaceused" procedure I get "24733.88 MB" as the result. To reduce the size of this database, I dropped 15 columns (mostly int columns) and ran "sp_spaceused" again, but I still get "24733.88 MB" as the result.
Why is the database size not reducing after removing so many columns? Am I missing anything here?
Edit: I have tried shrinking the database, but it didn't help either.

Try running the following command:
DBCC CLEANTABLE ('dbname', 'yourTable', 0)
It frees space from dropped variable-length columns in tables or indexed views. More information here: DBCC CLEANTABLE, and here: Space used does not get changed after dropping a column.
Also, as correctly pointed out in the link posted in the first comment on this answer: after you've executed the DBCC CLEANTABLE command, you need to REBUILD your clustered index (if the table has one) in order to get the space back.
ALTER INDEX IndexName ON YourTable REBUILD

When a variable-length column is dropped from a table, it does not reduce the size of the table. The table size stays the same until the indexes are reorganized or rebuilt.
There is also the DBCC CLEANTABLE command, which can be used to reclaim the space previously occupied by variable-length columns. Here is the syntax:
DBCC CLEANTABLE ('MyDatabase', 'MySchema.MyTable', 0) WITH NO_INFOMSGS;
GO
Raj

The database size will not shrink simply because you have deleted objects. The database server usually holds the reclaimed space to be used for subsequent data inserts or new objects.
To reclaim the space freed, you have to shrink the database file. See How do I shrink my SQL Server Database?
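As a minimal sketch of such a shrink (assuming the data file is file_id 1 and that you want to leave it at roughly 20 GB; both values are placeholders, not from the question):
-- Shrink the primary data file to about 20 GB; the target size argument is in MB.
-- Check sys.database_files first to confirm the file_id for your database.
USE YourDatabase;
DBCC SHRINKFILE (1, 20480);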

Related

SQL Server: delete from large table

We have a log table in our production database which is hosted on Azure. The table has grown to about 4.5 million records and now we just want to delete all the records from that log table.
I tried running
Delete from log_table
And I also tried
Delete top 100 from log_table
Delete top 20 from log_table
When I run the queries, database usage jumps to 100% and the query just hangs. I believe this is because of the large number of records in the table. Is there a way we can overcome the issue?
To delete all rows in a big table, you can use the TRUNCATE TABLE command.
It removes all rows from a table and is much faster than DELETE, since it deallocates the table's data pages instead of logging each deleted row.
Ex:
TRUNCATE TABLE table_name;
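If TRUNCATE TABLE is not an option (for example, foreign keys reference the table or you need to keep some rows), a batched DELETE loop is a common alternative; note that DELETE TOP requires parentheses. A sketch, with an arbitrary batch size:
-- Delete in small batches so each transaction stays short and the log stays manageable.
WHILE 1 = 1
BEGIN
    DELETE TOP (10000) FROM log_table;
    IF @@ROWCOUNT = 0 BREAK;
END;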
In an Azure SQL database, you have several options for controlling the size of your database and its log file. First, let's start with some definitions:
Data space used is the amount of space used to store database data. Generally, space used increases (decreases) on inserts (deletes). In some cases, the space used does not change on inserts or deletes depending on the amount and pattern of data involved in the operation and any fragmentation. For example, deleting one row from every data page does not necessarily decrease the space used.
Data space allocated is the amount of formatted file space made available for storing database data. The amount of space allocated grows automatically, but never decreases after deletes. This behavior ensures that future inserts are faster since space does not need to be reformatted.
Data space allocated but unused represents the maximum amount of free space that can be reclaimed by shrinking database data files.
Data max size is the maximum amount of space that can be used for storing database data. The amount of data space allocated cannot grow beyond the data max size.
In Azure SQL Database, to shrink files you can use either DBCC SHRINKDATABASE or DBCC SHRINKFILE commands:
DBCC SHRINKDATABASE shrinks all data and log files in a database using a single command. The command shrinks one data file at a time, which can take a long time for larger databases. It also shrinks the log file, which is usually unnecessary because Azure SQL Database shrinks log files automatically as needed.
DBCC SHRINKFILE command supports more advanced scenarios:
It can target individual files as needed, rather than shrinking all files in the database.
Each DBCC SHRINKFILE command can run in parallel with other DBCC SHRINKFILE commands to shrink multiple files at the same time and reduce the total shrink time, at the expense of higher resource usage and a higher chance of blocking user queries that are executing during the shrink.
If the tail of the file does not contain data, it can reduce the allocated file size much faster by specifying the TRUNCATEONLY argument, which does not require any data movement within the file.
Now, moving on to some useful SQL queries:
-- Shrink database data space allocated.
DBCC SHRINKDATABASE (N'database_name');
-- Review file properties, including file_id and name values to reference in shrink commands
SELECT file_id,
       name,
       CAST(FILEPROPERTY(name, 'SpaceUsed') AS bigint) * 8 / 1024. AS space_used_mb,
       CAST(size AS bigint) * 8 / 1024. AS space_allocated_mb,
       CAST(max_size AS bigint) * 8 / 1024. AS max_file_size_mb
FROM sys.database_files
WHERE type_desc IN ('ROWS','LOG');
-- Shrink the database data file named 'data_0' by removing all unused space at the end of the file, if any.
DBCC SHRINKFILE ('data_0', TRUNCATEONLY);
GO
When it comes to the log file of a database, you can use the following queries:
-- Shrink the database log file (always file_id 2), by removing all unused space at the end of the file, if any.
DBCC SHRINKFILE (2, TRUNCATEONLY);
... and to have the database files (including the log file) shrink automatically when they accumulate unused space, you can use:
-- Enable auto-shrink for the current database.
ALTER DATABASE CURRENT SET AUTO_SHRINK ON;
For reference purposes, I didn't get this information myself, but extracted it from this official article in Microsoft Docs

Azure SQL DB - added size restriction on NVARCHAR column and the size of my DB bloated. Why did this happen?

I have an Azure SQL DB Table with about 700k records and 9 columns. Most of those columns were initially set to NVARCHAR(max)
I then decided to apply a SQL script to limit the column sizes, as I felt this should optimize performance. I ran the following queries inside Azure Data Studio (I'm on a Mac) against each of the columns:
ALTER TABLE [MyTable] ALTER COLUMN [ColumnInQuestion1] nvarchar(500) NULL;
ALTER TABLE [MyTable] ALTER COLUMN [ColumnInQuestion2] nvarchar(500) NULL;
etc etc...
These queries took around 20 minutes to complete. However, after I looked at my DB, I noticed that its size in Azure had increased by almost 30%: from around 1.5 GB to just over 1.9 GB.
Why did this happen? Does Azure/Data Studio keep some kind of backup of the table during query execution that it doesn't get rid of afterwards, or does limiting column length really increase the DB size?
Moving from "MAX" to "500" column size changed how SQL allocates the data in your table rows. Pages (logical chunks of data) might be left over from how the rows were handled with the "MAX" columns.
REBUILD any index that had the altered column(s) in it.
Run the DBCC CLEANTABLE command on your table.
https://learn.microsoft.com/en-us/sql/t-sql/database-console-commands/dbcc-cleantable-transact-sql?view=sql-server-ver16
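A minimal sketch of those two steps, reusing [MyTable] from the question and a placeholder database name:
-- Rebuild every index on the table so the altered columns are rewritten compactly.
ALTER INDEX ALL ON [MyTable] REBUILD;
-- Reclaim space left behind by the shortened variable-length (formerly MAX) values.
DBCC CLEANTABLE ('MyDatabase', 'MyTable', 0) WITH NO_INFOMSGS;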

SQL not able to get storage space back when deleting a table

I have a problem when it comes to deleting a large table from my database (170 GB).
When I delete this large table via right-click > Delete, I do not get the storage space freed again. Of course the table is gone from the database, but the space the database needs does not shrink.
Can anyone tell me what is wrong?
Tables are stored in table spaces. These are allocated to the database, regardless of whether the space is actually used to store tables (or indexes or anything else).
When you delete the table, you have freed space in the table space. The space is available to the database for your next table (or whatever). You need to either drop or shrink the table space to release the space back to the operating system.
A place to start is with dbcc shrinkfile, documented here.
Short answer:
Run sp_clean_db_free_space before shrinking. My assumption is that you've already tried shrinking the files, but if not, that question has been answered.
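A minimal sketch of that sequence, with the database and file names as placeholders:
-- Clean residual data left on pages by deleted rows, then release unused space.
EXEC sp_clean_db_free_space @dbname = N'YourDatabase';
-- Give the empty space at the end of the data file back to the operating system.
DBCC SHRINKFILE (N'YourDatabase_Data', TRUNCATEONLY);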
Parenthetical statement:
You shouldn't shrink databases if you can avoid it.
Long answer: The behavior you see is the result of ghost records. To understand this at a system level, read this article: Inside the Storage Engine: Ghost cleanup in depth.

Teradata Drop Column returns with "no more room"

I am trying to drop a varchar(100) column from a 150 GB table (4.6 billion records). All the data in this column is null. I have 30 GB more space in the database.
When I attempt to drop the column, it fails with "no more room in database XY". Why does such an action need so much space?
The ALTER TABLE statement needs temporary storage for the altered version before it overwrites the original table. I guess the table that you are trying to alter occupies at least 1/3 of your total storage.
This could happen for a variety of reasons. It's possible that one of the AMPs in your database is full; this would cause that error even with a minor table alteration.
Try running the following SQL to check space:
select VProc, CurrentPerm, MaxPerm
from dbc.DiskSpace
where DatabaseName='XY';
Also, you should check which column the primary index of this very large table is on. If the data is badly skewed across AMPs, you can also run into space issues when altering the table or even when running a query against it.
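A sketch of a per-table skew check against the standard DBC view (the database and table names are placeholders):
-- Compare average and maximum per-AMP space for the table; a large gap indicates skew.
SELECT TableName,
       SUM(CurrentPerm) AS total_perm,
       AVG(CurrentPerm) AS avg_amp_perm,
       MAX(CurrentPerm) AS max_amp_perm
FROM dbc.TableSize
WHERE DatabaseName = 'XY'
  AND TableName = 'MyBigTable'
GROUP BY TableName;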
For additional suggestions, I found a decent article on the kinds of things you may want to investigate when the "no more room in database" error occurs - Teradata SQL Tutorial. Some of the suggestions include:
dropping any intermediary work or "sandbox" tables
implementing single-value or multi-value compression (see the sketch after this list)
dropping unwanted/unnecessary secondary indexes
removing data from dbc tables such as the access log or DBQL tables
removing and archiving old tables that are no longer used
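As a hedged sketch of multi-value compression on an existing non-index column (the column name and value list are purely hypothetical):
-- Add a compress list so frequent values stop consuming per-row space.
-- The column must not be part of the primary index.
ALTER TABLE XY.MyBigTable ADD status_code COMPRESS ('A', 'I', 'P');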

How can I check the size of a SQL table row in B/KB/MB?

I'm aware of the sp_spaceused stored procedure for checking the size of a SQL table or its database, but is there a way to check the size of a single table row?
What I've Tried:
Running sp_spaceused on the table before and after inserting a row (it doesn't change the result of the sp_spaceused query as the row is not substantial enough in size)
I often use a script I found here to get the size and even more information. I am sure you can adapt it to your needs.
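The linked script isn't reproduced here, but as a rough sketch you can approximate a single row's in-row size by summing DATALENGTH over its columns (the table, columns, and key value are placeholders):
-- Approximate in-row data size in bytes for one row; this ignores the row header,
-- null bitmap, and any off-row (LOB) storage.
SELECT ISNULL(DATALENGTH(Col1), 0)
     + ISNULL(DATALENGTH(Col2), 0)
     + ISNULL(DATALENGTH(Col3), 0) AS row_size_bytes
FROM dbo.MyTable
WHERE Id = 42;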