SQL Server Table Partitioning, what is happening behind the scenes?

I'm working with table partitioning on an extremely large fact table in a warehouse. I have executed the script a few different ways, with and without nonclustered indexes. With the indexes it appears to dramatically expand the log file, while without the nonclustered indexes it expands the log file much less but takes more time to run due to the rebuilding of the indexes.
What I am looking for is any links or information about what happens behind the scenes, specifically to the log file, when you split a table partition.

I think it isn't too hard to theorize what is going on (to a certain extent). Behind the scenes each partition is given a different HoBT (heap or B-tree), which in normal language means each partition is in effect sitting on its own hidden table.
So theorizing the splitting of a partition (assuming data is moving) would involve:
inserting the data into the new table
removing data from the old table
The behaviour of the nonclustered indexes can be reasoned about the same way, but the details change depending on whether there is a clustered index and whether each index is partition aligned.
Given a bit more information on the table (clustered index or heap) we could theorize this further.
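For illustration, a minimal hypothetical T-SQL sketch (made-up names throughout) of the kind of SPLIT that forces data movement between those hidden tables:

-- yearly partitions; with RANGE RIGHT each boundary value belongs to the right-hand partition
CREATE PARTITION FUNCTION pfDates (datetime)
    AS RANGE RIGHT FOR VALUES ('2020-01-01', '2021-01-01');
CREATE PARTITION SCHEME psDates
    AS PARTITION pfDates ALL TO ([PRIMARY]);

-- tell the scheme which filegroup the new partition will live on...
ALTER PARTITION SCHEME psDates NEXT USED [PRIMARY];
-- ...then split a range that already contains rows: every 2020 row on or after
-- 1 July is deleted from the old HoBT and inserted into the new one, fully logged
ALTER PARTITION FUNCTION pfDates() SPLIT RANGE ('2020-07-01');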

If the partition function is used by a partitioned table and SPLIT results in partitions where both will contain data, SQL Server will move the data to the new partition. This data movement will cause transaction log growth due to inserts and deletes.
This is from an article by Microsoft on Partitioned Table and Index Strategies.
So it looks like SPLIT does a delete from the old partition and an insert into the new one, which would explain the growth in the transaction log.
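That is also why the usual advice (general guidance, not from the quoted article) is to only ever split an empty partition, typically a spare one kept at the end of the range, so that no rows move and the log stays quiet. Continuing the hypothetical sketch above:

-- the rightmost partition holds no rows yet, so this SPLIT moves no data
ALTER PARTITION SCHEME psDates NEXT USED [PRIMARY];
ALTER PARTITION FUNCTION pfDates() SPLIT RANGE ('2022-01-01');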

Related

Postgres extracting data from a huge table based on non-indexed column

We have a table in production which has been there for quite some time, and its volume is huge (close to 3 TB). Since most of the data in this table is stale and unused, we are planning to get rid of the historical rows, which have no references anywhere.
There is a boolean column "active" which we can use to identify this data; however, this column is not indexed.
Considering the volume of the table I am not sure whether creating a new index is going to help. I tried to incrementally delete the inactive rows 100K at a time, but the volume is so huge that this would take months to clear up.
The primary key of the table is of type UUID. I thought of creating a new table and inserting only the rows with active = true:
insert into mytable_active
select * from mytable
where is_active = true;
But as expected this approach also fails because of the volume and just keeps running forever.
Any suggestions or approaches would be most welcome.
When you need to delete a lot of rows quickly, partitioning is great... when the table is already partitioned.
If there is no index on the column you need, then at least one full table scan will be required, unless you can use another index like "date" or something to narrow it down.
I mean, you could create a partial index "WHERE active", but building it would also require the full table scan you're trying to avoid, so... meh.
First, DELETE. Just don't, not even in small bits with LIMIT. Not only will it rewrite most of the table (3 TB of writes), but it will also write all of that to the WAL (3 more TB), and it will update the indexes and write those changes to the WAL too. This will take forever, and the random IO from the index updates will nuke your performance. And if it ever finishes, you will still have a 3 TB file, with most of it unallocated. Plus indexes.
So, no DELETE. Uh, wait.
Scenario with DELETE:
Swap the table for a view "SELECT * FROM humongous WHERE active = true" and add INSTEAD OF triggers or rules on the view to redirect updates/inserts/deletes to the underlying table. Make sure the triggers set active = true on all new rows.
Re-create each index (concurrently) except the primary key, adding "WHERE active = true". This will require a full table scan for the first index, even if you create it on "active", because CREATE INDEX ... WHERE doesn't seem to be able to use another index to speed things up when a WHERE clause is specified.
Drop the old indexes.
Note that the purpose of the view is only to ensure absolutely all queries have active = true in the WHERE clause; otherwise they wouldn't be able to use the conditional indexes we just created, every query would become a full table scan, and that would be undesirable.
And now, you can DELETE, bit by bit, against the underlying table (the view hides the inactive rows): delete from humongous where id in (select id from humongous where active = false limit 100000);
It's a tradeoff: you'll have a number of full table scans to recreate the indexes, but you'll avoid the random IO from index updates during a huge delete, which is the real reason why you say it will take months.
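A minimal sketch of this scenario, assuming the table is called mytable with a uuid primary key id and the boolean column active (names to adapt):

-- hide the inactive rows behind a view with the table's old name
ALTER TABLE mytable RENAME TO humongous;
CREATE VIEW mytable AS
    SELECT * FROM humongous WHERE active = true;
-- INSTEAD OF triggers/rules to redirect writes to humongous not shown

-- conditional index, built without blocking writes; the first one still
-- costs a full table scan
CREATE INDEX CONCURRENTLY humongous_active_id
    ON humongous (id) WHERE active = true;

-- then delete bit by bit against the underlying table
DELETE FROM humongous
WHERE id IN (SELECT id FROM humongous WHERE active = false LIMIT 100000);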
Scenario with INSERT INTO new_table SELECT...
If you have inserts and updates running on this huge table, then you have a problem, because those writes will not be transferred to the new table while the copy is running. So a solution would be to:
turn off all the scripts and services that run long queries
lock everything
create new_table
rename huge_table to huge_old
create a view named huge_table that is a UNION ALL of new_table and huge_old (see the sketch after this list). From the application's point of view, this view replaces huge_table. It must handle priority, i.e. if a row is present in the new table, a row with the same id present in the old table must be ignored, so it will need an anti-join. This step should be tested carefully beforehand.
unlock
Then, let it run for a while, see if the view does not destroy your performance. At this point, if it breaks, you can easily go back by dropping the view and renaming the table back to its old self. I said to turn off all the scripts and services that run long queries because these might fail with the view, and you don't want to take a big lock while one long query is running, because that will halt everything until it's done.
add INSTEAD OF insert/update/delete triggers on the view to redirect the writes to new_table. Inserts go directly to the new table; updates will have to transfer the row; deletes will have to hit both tables; and UNIQUE constraints will be... interesting. This will be a bit complicated.
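A rough sketch of the priority view from the list above (hypothetical names, id being the uuid primary key):

CREATE VIEW huge_table AS
    SELECT * FROM new_table
    UNION ALL
    SELECT o.*
    FROM huge_old AS o
    WHERE NOT EXISTS (SELECT 1 FROM new_table AS n WHERE n.id = o.id);
-- the INSTEAD OF insert/update/delete triggers described above go on this view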
Now to transfer the data.
Even if it takes a while, who cares? It will finish eventually. I suppose if you have a 3 TB table you must have some decent storage; even if it's those old spinning things we used to put data on, it shouldn't take more than a few hours if the IO is not random. So the idea is to use only linear IO.
Fingers crossed the table does not have a big text column stored in a separate TOAST table, which would require one random access per row. Did you check?
Now, you might actually want it to run for longer so it uses less IO bandwidth, both for reads and writes, and especially WAL writes. It doesn't matter how long the query runs as long as it doesn't degrade performance for the rest of the users.
Postgres will probably go for a parallel table scan to use all the cores and all the IO in the box, so maybe disable that first.
Then I think you should try to avoid the hilarious (for onlookers) scenario where it reads from the table for half a day, not finding any rows that match, so the disks handle the reads just fine, then it finds all the rows that match at the end and proceeds to write 300GB to the WAL and the destination table, causing huge write contention, and you have to Ctrl-C it when you know, you just know it in your gut that it was THIS CLOSE to finishing.
So:
create table bogus_table (like mytable);  -- copies the columns, but no indexes
insert into bogus_table select * from mytable;
10% of "active" rows is still 300GB so better check the server can handle writing a 300GB table without slowing down. Watch vmstat and check if iowait goes crazy, watch number of transactions per second, query latency, web server responsiveness, the usual database health stuff. If the phone rings, hit Ctrl-C and say "Fixed!"
After it's done a few checkpoints, Ctrl-C. Time to do the real thing.
Now to make this query take much longer (and therefore destroy much less IO bandwidth) you can add this to the columns in your select:
pg_sleep((random()<0.000001)::INTEGER * 0.1)
That will make it sleep for 0.1s every million rows on average. Adjust to taste while looking at vmstat.
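Wired into the real copy it could look like this (a sketch only, with hypothetical column naming, and with the pg_sleep() call moved into the WHERE clause so the select list still matches the target table; pg_sleep() returns a non-NULL void, so the extra condition is always true):

INSERT INTO mytable_active
SELECT *
FROM mytable
WHERE active = true
  -- sleeps 0.1s roughly once per million rows examined
  AND pg_sleep((random() < 0.000001)::integer * 0.1) IS NOT NULL;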
You can also monitor query progress using hacks.
It should work fine.
Once the interesting rows have been extracted from the accursed table, you could move the old data to a data warehouse or something, or to cold storage, or have fun loading it into ClickHouse if you want to run some analytics.
Maybe partitioning the new table would also be a good idea, before it grows back to 3TB. Or periodically moving old rows.
Now, I wonder how you backup this thing...
-- EDIT
OK, I have another idea, maybe simpler, but you'll need a box.
Get a second server with fast storage and set up logical replication. On this replica server, create an empty UNLOGGED copy of the huge table with only one index, on the primary key. Logical replication will copy the entire table, so it will take a while. A second network card in the original server, or some QoS tuning, would help avoid blowing up the ethernet connection you actually use to serve queries.
Logical replication is row based and identifies rows by primary key, so you absolutely need to create that PK index on the replica manually.
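A bare-bones sketch of that setup (hypothetical names and connection string; assumes wal_level = logical on the master):

-- on the master
CREATE PUBLICATION huge_pub FOR TABLE mytable;

-- on the replica: same columns, UNLOGGED, and only the PK index
CREATE UNLOGGED TABLE mytable (
    id      uuid PRIMARY KEY,   -- replication matches rows on this key
    active  boolean NOT NULL,
    payload text
);
CREATE SUBSCRIPTION huge_sub
    CONNECTION 'host=master.example dbname=prod user=repl'
    PUBLICATION huge_pub;       -- the initial full-table copy starts here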
I've tested it on my home box right now and it works very well. The initial data transfer was a bit slow, but that may be my network. Pausing then resuming replication transferred rows inserted or updated on the master during the pause. However, renaming the table seems to break it, so you won't be able to do INSERT INTO SELECT, you'll have to DELETE on the replica. With SSDs, only one PK index, the table set to UNLOGGED, it should not take forever. Maybe using btrfs would turn the random index write IO into linear IO due to its copy on write nature. Or, if the PK index fits in shared_buffers, just YOLO it and set checkpoint_timeout to "7 days" so it doesn't actually write anything. You'll probably need to do the delete in chunks so the replicated updates keep up.
When I dropped the PK index to speed up the deletion, then recreated it before re-enabling replication, it didn't catch up on the updates. So you can't drop the index.
But is there a way to transfer only the rows you want to keep, instead of transferring everything and deleting, while also having the replica keep up with the master's updates?... It's possible for inserts (just disable the initial data copy) but unfortunately not for updates. You'd need an integer primary key so you could generate bogus rows on the replica that would then be updated during replication... but you can't do that with your UUID PK.
Anyway. Once this is done, set the number of WAL segments to be kept on the master server to a very high value, to resume replication later without missing updates.
And now you can run your big DELETE on the replica. When it's done, vacuum, maybe CLUSTER, re-create all indexes, etc, and set the table to LOGGED.
Then you can failover to the new server. Or if you're feeling adventurous, you could replicate the replica's table back on the master, since it will have the same name it should be in another schema.
That should allow for very little downtime since all updates are replicated, the replica will always be up to date.
I would suggest:
Copy the active records to a temporary table
Drop the main table
Rename the temporary table to the main table name
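Roughly (a sketch with hypothetical names; indexes, constraints, grants and triggers all have to be re-created on the new table by hand):

BEGIN;
CREATE TABLE mytable_active AS
    SELECT * FROM mytable WHERE active = true;
DROP TABLE mytable;
ALTER TABLE mytable_active RENAME TO mytable;
COMMIT;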

What happens when I drop a column in SQL Server

If my understanding is right, the number of rows that are stored on one page in SQL Server is determined by the number of columns in the table and their datatypes. One I/O operation can read one page, so the more rows fit into one page, the more rows one I/O operation can return, and the faster your queries run.
I wonder what happens when you drop a column. Does SQL Server go back to the storage and "re-store" the data, i.e. what if I drop enough columns for more rows to fit on one page? And if SQL Server does not do that automatically, can I force the process?
I'm removing a lot of text columns and a few IDs on a heavily used table, and I'm hoping that the I/O will improve after I drop the columns.
Dropping a column is a logical operation, not a physical one. No data gets modified. The column metadata gets marked as 'deleted' and will be ignored. The record size is unchanged. Read SQL Server table columns under the hood for a more detailed explanation and clear examples demonstrating this claim.
As Stefan said, you have to rebuild the table (heap or clustered index) to 'reclaim' the space.
I'm removing a lot of text columns and a few IDs on a heavily used table, and I'm hoping that the I/O will improve after I drop the columns.
Use indexes to reduce IO. For unindexable ad-hoc analytical workloads, use columnstores. Read How to analyse SQL Server performance.
You can rebuild the clustered index to force SQL Server to use the free space. Otherwise you have "holes" in the pages.
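For example (a sketch with hypothetical names; ALTER TABLE ... REBUILD needs SQL Server 2008 or later, on 2005 rebuild the clustered index instead):

-- reclaim the space of dropped columns by rebuilding the heap or clustered index
ALTER TABLE dbo.BigTable REBUILD;
-- or, for a clustered table, rebuild its clustered index by name
ALTER INDEX PK_BigTable ON dbo.BigTable REBUILD;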

Drop/Rebuild indexes during Bulk Insert

I have tables with more than 70 million records in them; what I just found is that developers were dropping the indexes before a bulk insert and then recreating them after the bulk insert was over. Execution time for the stored procedure is nearly 30 minutes (drop the indexes, do the bulk insert, then recreate the indexes from scratch).
Advice: is it good practice to drop indexes from a table which has 70+ million records and is growing by 3-4 million every day?
Would it help performance not to drop the indexes before the bulk insert?
What is the best practice to follow when doing a BULK INSERT into a big table?
Thanks and Regards
Like everything in SQL Server, "It Depends"
There is overhead in maintaining indexes during the insert and there is overhead in rebuilding the indexes after the insert. The only way to definitively determine which method incurs less overhead is to try them both and benchmark them.
If I were a betting man I would put my wager that leaving the indexes in place would edge out the full rebuild but I don't have the full picture to make an educated guess. Again, the only way to know for sure is to try both options.
One key optimization is to make sure your bulk insert is in clustered key order.
If I'm reading your question correctly, that table is pretty much off limits (locked) for the duration of the load and this is a problem.
If your primary goal is to increase availability/decrease blocking, try taking the A/B table approach.
The A/B approach breaks down as follows:
Given a table called "MyTable" you would actually have two physical tables (MyTable_A and MyTable_B) and one view (MyTable).
If MyTable_A contains the current "active" dataset, your view (MyTable) selects all columns from MyTable_A. Meanwhile you have carte blanche on MyTable_B (which contains a copy of MyTable_A's data plus the new data you're loading). Once MyTable_B is loaded, indexed and ready to go, update your MyTable view to point to MyTable_B and truncate MyTable_A.
This approach assumes that you're willing to increase I/O and storage costs (dramatically, in your case) to maintain availability. It also assumes that your big table is also relatively static. If you do follow this approach, I would recommend a second view, something like MyTable_old which points to the non-live table (i.e. if MyTable_A is the current presentation table and is referenced by the MyTable view, MyTable_old will reference MyTable_B) You would update the MyTable_old view at the same time you update the MyTable view.
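A compressed sketch of the swap (hypothetical object and column names):

-- initially MyTable_A is live
CREATE VIEW dbo.MyTable AS
    SELECT Id, Payload FROM dbo.MyTable_A;
GO
CREATE VIEW dbo.MyTable_old AS
    SELECT Id, Payload FROM dbo.MyTable_B;
GO
-- after MyTable_B is loaded and indexed, repoint both views and recycle A
ALTER VIEW dbo.MyTable AS
    SELECT Id, Payload FROM dbo.MyTable_B;
GO
ALTER VIEW dbo.MyTable_old AS
    SELECT Id, Payload FROM dbo.MyTable_A;
GO
TRUNCATE TABLE dbo.MyTable_A;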
Depending on the nature of the data you're inserting (and your SQL Server version/edition), you may also be able to take advantage of partitioning (MSDN blog on this topic.)

Delete large portion of huge tables

I have a very large table (more than 300 million records) that will need to be cleaned up. Roughly 80% of it will need to be deleted. The database software is MS SQL 2005. There are several indexes and statistics on the table, but no external relationships.
The best solution I came up with so far is to put the database into "simple" recovery mode, copy all the records I want to keep into a temporary table, truncate the original table, set IDENTITY_INSERT to ON and copy the data back from the temp table.
It works but it's still taking several hours to complete. Is there a faster way to do this ?
As per the comments, my suggestion would be to simply dispense with the copy-back step and promote the table containing the records to be kept to become the new main table, by renaming it.
It should be quite straightforward to script out the index/statistics creation to be applied to the new table before it gets swapped in.
The clustered index should be created before the non clustered indexes.
A couple of points I'm not sure about though.
Whether it would be quicker to insert into a heap and then create the clustered index afterwards. (I guess not, if the insert can be done in clustered index order.)
Whether the original table should be truncated before being dropped (I guess yes)
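The swap itself is short in T-SQL (a sketch with hypothetical names, run after the indexes/statistics have been scripted out and applied to the new table):

TRUNCATE TABLE dbo.BigTable;    -- deallocates the old pages cheaply
DROP TABLE dbo.BigTable;
EXEC sp_rename 'dbo.BigTable_keep', 'BigTable';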
#uriDium -- Chunking using batches of 50,000 will escalate to a table lock, unless you have disabled lock escalation via ALTER TABLE (SQL Server 2008) or other various locking tricks.
I am not sure what the structure of your data is. When does a row become eligible for deletion? If it is purely an ID- or date-based thing, then you can create a new table for each day, insert your new data into the new tables, and when it comes to cleaning simply drop the required tables. Then for any selects, construct a view over all the tables. Just an idea.
EDIT: (In response to comments)
If you are maintaining a view over all the tables then no it won't be complicated at all. The complex part is coding the dropping and recreating of the view.
I am assuming that you don't want your data to be locked down too much during the deletes. Why not chunk the delete operations? Create a stored procedure that deletes the data in chunks, 50,000 rows at a time. This should make sure that SQL Server keeps row locks instead of taking a table lock. Use the
WAITFOR DELAY 'x'
in your WHILE loop so that you can give other queries a bit of breathing room (a sketch follows below). Your problem is the age-old computer science tradeoff: space vs. time.
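A sketch of such a chunked delete (hypothetical table and predicate; TOP in DELETE works on SQL Server 2005 and later):

WHILE 1 = 1
BEGIN
    DELETE TOP (50000) FROM dbo.BigTable
    WHERE Eligible = 1;            -- whatever marks a row for deletion
    IF @@ROWCOUNT = 0 BREAK;       -- nothing left to delete
    WAITFOR DELAY '00:00:05';      -- breathing room for other queries
END;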

SQL Server 2005: disk space taken by dropped columns

I have a big table in SQL Server 2005 that's taking about 3.5 GB of space (according to sp_spaceused). It has 10 million records, and several indexes.
I just dropped a bunch of columns from it, such that the record length got reduced to half, and to my surprise it took zero time to do that. Obviously sp_spaceused was still reporting the same space taken; SQL Server hadn't really done anything when dropping the columns other than marking them as "dropped".
So I moved all the data from this table into another new table, truncated it, and moved all the data back, so that it'd get all reconstructed.
Now, after that, data is taking 2.8 GB, which IS less than before, but I expected a bigger drop.
Is it possible that the fact that this table originally had these columns is still leaving something there?
Was truncating it not enough? Should I drop it and create it again with the smaller column set?
Or is the data really taking 2.8 GB?
Thanks!
You will need to rebuild the clustered index (assuming you have one - by default, your primary key is the clustered key).
ALTER INDEX (your clustered index) ON (your table) REBUILD
The data is really the leaf level of your clustered index - once you rebuild it, it will be "compacted" and the rows should be stored on much fewer data pages, reducing your database size, too.
If that doesn't help at all, you might also need to run a DBCC SHRINKDATABASE on your database to really reclaim the space. These two steps together should really get you some smaller database file!
Marc
How did you calculate the "bigger drop" you expected? Note that the data comes in 8 KB pages, which means that even if individual rows are smaller, it does not always follow that you need fewer pages to store them.
For example (an extreme example): if your rows used to be 7.5 KB each, only one row fits per page. You drop some columns, your row is now 5 KB, but it is still one row per page.