Approximate disk space consumption of rows on SQL Server

I'd like to understand what causes the size of a SQL Server 12 database. The .mdf file is 21.5 GB. Using the "Disk Usage by Top Tables" report in SQL Server Management Studio, I can see that 15.4 GB are used by the "Data" of one table. This table has 1,691 rows in 4 columns (int, varchar(512), varchar(512), image). I assume the image column is responsible for most of the consumption. But
Select (sum(datalength(<col1>)) + ... )/1024.0/1024.0 as MB From <Table>
only gives 328.9 MB.
What might be the reason behind this huge discrepancy?
Additional information:
For some rows the image column is updated regularly.
This is a screenshot of the report:
If we can trust it, indices or unused space should not be the cause.
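For completeness, here is a minimal sketch that cross-checks the report's numbers from T-SQL, reusing the <Table> placeholder from the query above:
-- Reserved / data / index / unused space for the table
EXEC sp_spaceused N'<Table>';

-- Breakdown by allocation unit type (the image column's contents live in LOB_DATA)
SELECT au.type_desc,
       SUM(au.total_pages) * 8 / 1024.0 AS allocated_MB,
       SUM(au.used_pages)  * 8 / 1024.0 AS used_MB
FROM sys.partitions AS p
JOIN sys.allocation_units AS au
    ON (au.type IN (1, 3) AND au.container_id = p.hobt_id)      -- in-row / row-overflow
    OR (au.type = 2       AND au.container_id = p.partition_id) -- LOB data
WHERE p.object_id = OBJECT_ID(N'<Table>')
GROUP BY au.type_desc;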

Maybe you are using a lot of indexes per table; these all add up. Maybe your auto-growth settings are wrong.

The reason was a long-running transaction on another, unrelated database (!) on the same SQL Server instance. The READ COMMITTED SNAPSHOT isolation level filled the version store. Disconnecting the other application reduced the disk usage to a sensible amount.
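If you suspect something similar, one way to check is to look at version-store usage and at long-running snapshot transactions; a minimal sketch using standard DMVs (column availability varies slightly by version):
-- Version store size in tempdb (where row versions are kept)
SELECT SUM(version_store_reserved_page_count) * 8 / 1024.0 AS version_store_MB
FROM tempdb.sys.dm_db_file_space_usage;

-- Long-running snapshot transactions that keep old versions alive
SELECT session_id, transaction_id, elapsed_time_seconds
FROM sys.dm_tran_active_snapshot_database_transactions
ORDER BY elapsed_time_seconds DESC;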

Related

SQL Server - Memory quota error during migration to in-memory table

We are currently migrating to in-memory tables on SQL Server 2019 Standard Edition. The disk-based table is 55 GB of data + 54 GB of indexes (71M records). RAM is 900 GB. But during data migration (INSERT statement) we get an error message:
Msg 41823, Level 16, State 109, Line 150
Could not perform the operation because the database has reached its quota for in-memory tables. This error may be transient. Please retry the operation.
The in-memory file is “unlimited”, so it looks strange since SQL Server 2019 should not have any size restrictions for in-memory tables.
Why do you think in-memory data size in a single mem-opt table is unlimited on standard edition?
From Memory Limits in SQL Server 2016 SP1 (all of which still applies according to 2019 docs):
Each user database on the instance can have an additional 32GB allocated to memory-optimized tables, over and above the buffer pool limit.
So, you can do what you want, I suppose, but you'll have to spread it across multiple databases. You won't be able to store more than 32GB in a single mem-opt table or even in multiple mem-opt tables in a single database.
Cropped and probably inappropriately-scaled screenshot from the 2019 docs:
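To see how close a database already is to that per-database cap, the memory consumed by memory-optimized tables can be inspected with the sys.dm_db_xtp_table_memory_stats DMV; a minimal sketch (run in the database that holds the mem-opt tables):
-- Memory used by each memory-optimized table in the current database;
-- compare the total against the ~32 GB Standard Edition cap
SELECT OBJECT_NAME(object_id) AS table_name,
       (memory_used_by_table_kb + memory_used_by_indexes_kb) / 1024.0 AS used_MB
FROM sys.dm_db_xtp_table_memory_stats
ORDER BY used_MB DESC;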

Why will my SQL Transaction log file not auto-grow?

The Issue
I've been running a particularly large query, generating millions of records to be inserted into a table. Each time I run the query I get an error reporting that the transaction log file is full.
I've managed to get a test query to run with a reduced set of results by using SELECT INTO instead of INSERT INTO a pre-built table. This reduced set of results generated a 20 GB table of 838,978,560 rows.
When trying to INSERT into the pre-built table I've also tried it with and without a clustered index. Both failed.
Server Settings
The server is running SQL Server 2005 (Full not Express).
The database being used is set to SIMPLE recovery, and there is space available (around 100 GB) on the drive the file sits on.
The transaction log file is set to grow in 250 MB increments up to a maximum of 2,097,152 MB.
The log file appears to grow as expected until it gets to 4,729 MB.
When the issue first appeared, the file grew to a lower value; however, I've since reduced the size of other log files on the same server, and this appears to allow this transaction log file to grow further by the same amount as the reduction in the other files.
I've now run out of ideas for how to solve this. If anyone has any suggestions or insight into what to do, it would be much appreciated.
First, you want to avoid auto-growth whenever possible; auto-growth events are HUGE performance killers. If you have 100 GB available, why not change the log file size to something like 20 GB (just temporarily, while you troubleshoot this)? My policy has always been to use 90%+ of the disk space allocated for a specific MDF/NDF/LDF file. There's no reason not to.
If you are using SIMPLE recovery, SQL Server is supposed to manage the task of returning unused log space, but sometimes it does not do a great job. Before running your query, check the available free log space. You can do this by:
right-click the DB > go to Tasks > Shrink > Files.
change the type to "Log"
This will help you understand how much unused space you have. You can set "Reorganize pages before releasing unused space > Shrink File" to 0. Moving forward you can also release unused space using CHECKPOINT; this may be something to include as a first step before your query runs.
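As a rough sketch of the above (MyDb and MyDb_log are placeholder names; pick a size that fits your disk):
-- How full is each database's log right now?
DBCC SQLPERF(LOGSPACE);

-- Pre-grow the log once instead of relying on 250 MB auto-growth steps
ALTER DATABASE MyDb
MODIFY FILE (NAME = MyDb_log, SIZE = 20480MB);

-- Under SIMPLE recovery, a checkpoint lets inactive log space be reused
CHECKPOINT;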

Moving data from one table to another in Sql Server 2005

I am moving around 10 million rows from one table to another in SQL Server 2005. The purpose of the data transfer is to take the old data offline.
After some time it throws an error: "The log file for database 'tempdb' is full."
My tempdb and templog are placed on a drive (other than the C drive) which has around 200 GB free. Also, my tempdb size is set to 25 GB.
As per my understanding I will have to increase the size of tempdb from 25 GB to 50 GB and set the log file Auto growth portion to "unrestricted file growth (MB)".
Please let me know about any other factors. I cannot experiment much, as I am working on a production database, so please let me know whether these changes will have any other impact.
Thanks in Advance.
You know the solution. It seems you are just moving part of the data to make your queries faster.
I agree with your solution:
As per my understanding I will have to increase the size of tempdb from 25 GB to 50 GB and set the log file Auto growth portion to "unrestricted file growth (MB)".
Go ahead
My guess is that you're trying to move all of the data in a single batch; can you break it up into smaller batches and commit fewer rows as you insert? Also, as noted in the comments, you may be able to set your destination database to the SIMPLE or BULK_LOGGED recovery model.
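A minimal sketch of that batching approach, assuming placeholder table and column names (dbo.SourceTable, dbo.ArchiveTable, Id, Col1, Col2) and SQL Server 2005 syntax:
-- Copy old rows in batches so each transaction stays small
DECLARE @batch INT
SET @batch = 50000

WHILE 1 = 1
BEGIN
    INSERT INTO dbo.ArchiveTable (Id, Col1, Col2)
    SELECT TOP (@batch) s.Id, s.Col1, s.Col2
    FROM dbo.SourceTable AS s
    WHERE NOT EXISTS (SELECT 1 FROM dbo.ArchiveTable AS a WHERE a.Id = s.Id)

    IF @@ROWCOUNT = 0 BREAK

    CHECKPOINT  -- under SIMPLE recovery, lets the log truncate between batches
END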
Why are you using the log file at all? Copy your data (data and log file), then set the recovery model to SIMPLE and run the transfer again.

Table size not reducing after purging a table

I recently performed a purge on my application table: 1.1 million records in total, using 11.12 GB of disk space.
I deleted 860k records, leaving 290k, but why did the space used only drop to 11.09 GB?
I monitor the detail in the report: Disk Usage > Disk Space Used by Data Files > Space Used.
Do I need to perform a data file shrink? This has puzzled me for a long time.
For MS SQL Server, rebuild the clustered indexes.
You have only deleted rows: not reclaimed space.
Use DBCC DBREINDEX or ALTER INDEX ... WITH REBUILD, depending on version.
(It's MS SQL because the disk space report is in SSMS.)
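A minimal sketch of both variants (dbo.MyTable is a placeholder name):
-- SQL Server 2005 and later
ALTER INDEX ALL ON dbo.MyTable REBUILD;

-- SQL Server 2000 and earlier
DBCC DBREINDEX ('dbo.MyTable');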
You need to explicitly call some operation (specific to your database management system) that will shrink the data file. The database engine doesn't shrink the file when you delete records; that's for optimization purposes, since shrinking is time-consuming.
I think this is like with mail folders in Thunderbird: If you delete something, it's just marked as deleted, but to get higher performance, the space isn't freed. So most of your 11.09 GB will now contain either your old data or 0's. Shrink data file will "compress" (or "clean") this by creating a new file that'll only contain the actual data that is left.
You probably need to shrink the table. I know that SQL Server doesn't do it for you by default; I would guess this is for performance reasons, and other DBs may be the same.

Slow MS SQL 2000, lots of timeouts. How can I improve performance?

I found this script on SQL Authority:
USE MyDatabase
GO
EXEC sp_MSforeachtable @command1 = 'print ''?'' DBCC DBREINDEX (''?'', '' '', 80)'
GO
EXEC sp_updatestats
GO
It has reduced my insert fail rate from 100% failure down to 50%.
It is far more effective than the reindexing Maintenance Plan.
Should I also be reindexing master and tempdb?
How often? (I write to this database 24 hrs a day)
Any cautions that I should be aware of?
RAID 5 on your NAS? That's your killer.
An INSERT is logged: it writes to the .LDF (log) file. This file is 100% write (well, close enough).
This huge write to read ratio generates a lot of extra writes per disk in RAID 5.
I have an article in the works (to add later): RAID 5 writes 4 times as much per disk as RAID 10 in 100% write situations.
Solutions
At a minimum, you need to split the data and log files for your database.
Edit: Clarified this line:
The log files need to go to RAID 1 or RAID 10 drives. It's not so important for data (.MDF) files. Log files are 100% write, so they benefit most from RAID 1 or RAID 10.
There are other potential issues too, such as a fragmented file system or many VLFs (virtual log files), depending on how your database has grown, but I'd say your main issue is RAID 5.
For a 3TB DB, I'd also stuff as much RAM as possible in (32GB if Windows Advanced/Enterprise) and set PAE/AWE etc. This will mitigate some disk issues but only for data caching.
A fill factor of 85 or 90 is the usual rule of thumb. If your inserts are wide and not strictly monotonic (unlike, say, an int IDENTITY column), then you'll have lots of page splits with anything higher.
I'm not the only one who does not like RAID 5: BAARF
Edit again:
Look for "Write-Ahead Logging (WAL) Protocol" in this SQL Server 2000 article. It's still relevant: it explains why the log file is important.
I can't find my article on how RAID 5 suffers compared to RAID 10 under 100% write loads.
Finally, SQL Server does I/O in 64k chunks: so format NTFS with 64k clusters.
This could be anything at all. Is the system CPU bound? IO bound? Is the disk too fragmented? Is the system paging too much? Is the network overloaded?
Have you tuned the indexes? I don't recall if there was an index tuning wizard in 2000, but at the least, you could run the profiler to create a workload that could be used by the SQL Server 2005 index tuning wizard.
Check out your query plans also. Some indexes might not be getting used or the SQL could be wholly inefficient.
What table maintenance do you have?
Is all the data in the tables relevant to today's processing?
Can you warehouse off some data?
What is your locking like? Are you locking the whole table?
EDIT:
SQL Profiler shows all interactions with SQL Server. It should be a DBA's lifeblood.
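A couple of quick SQL Server 2000-era checks for the index and locking questions above (a sketch; dbo.MyTable is a placeholder name):
-- Index fragmentation for one table
DBCC SHOWCONTIG ('dbo.MyTable');

-- What locks are currently held? Table-level locks show up as type TAB
EXEC sp_lock;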
Thanks for all of the help. I'm not there yet, but getting better.
I can't do much about hardware constraints.
All available RAM is allowed for SQL Server.
Fillfactor is set at 95
Using Profiler, an hour's trace run through index tuning suggested a 27% efficiency increase.
As a result, I doubled the number of successful INSERTs. Now only 1 out of 4 is failing.
Tracing again now and will tune afterwards to see if it gets better.
I don't understand locking yet.
For those who maintain SQL Server as a profession, am I on the right track?