Copying data we want to keep to a new table and then renaming - SQL

I have a Windows 2008 Enterprise Server running SQL Server 2008 Enterprise. There's a SQL database that is approximately 100 GB in size, and one of the tables in this database accounts for 85 GB. After doing some analysis, it turns out the table has grown to this size because a single event with no value is being logged into it, so it's been agreed that we can purge all instances of this event from the table. The table should be only around 1 GB in size once all occurrences of this event have been removed.
Someone suggested that a more efficient approach, rather than trying to delete the rows that contain this event id, would be to create the table again with no data, copy the data we want to keep into the new table (omitting any row with that event id), rename the original table to something else, and then rename the new table to the original table name.
The approach makes sense and I can see that perhaps this will be faster than deleting 100 rows at a time out of the original table. Can I please have some advice on how to go about this?

The query to copy everything to the new table goes like this:
SELECT * INTO dbo.NewTable FROM dbo.OldTable WHERE [event id] <> 6030
Then rename the original table (SQL Server doesn't support ALTER TABLE ... RENAME TO, so use sp_rename):
EXEC sp_rename 'dbo.OldTable', 'OldTable_History';
And rename the new table to the original name:
EXEC sp_rename 'dbo.NewTable', 'OldTable';
If you'd rather create the new table manually, do that first and then run:
INSERT INTO dbo.NewTable
SELECT * FROM dbo.OldTable WHERE [event id] <> 6030
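Putting the pieces together, a minimal sketch of the whole swap in one script. Note that SELECT ... INTO does not carry over indexes, constraints or permissions, so those would need to be recreated on the new table before the rename; the event id filter is the one from the question:
-- Copy only the rows we want to keep (minimally logged under the simple or bulk-logged recovery model)
SELECT *
INTO dbo.NewTable
FROM dbo.OldTable
WHERE [event id] <> 6030;

-- Recreate indexes/constraints on dbo.NewTable here to match dbo.OldTable

BEGIN TRANSACTION;
    EXEC sp_rename 'dbo.OldTable', 'OldTable_History';
    EXEC sp_rename 'dbo.NewTable', 'OldTable';
COMMIT TRANSACTION;

-- Once you're happy with the result, drop the old data
-- DROP TABLE dbo.OldTable_History;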

Related

What does it take to create a Partition Switching table? And how can I fill these tables?

A partition switching table is a normal table, but it must be structurally identical to the table you want to switch into.
As for how you fill the table, you might need to post more info about how you're filling your other tables. It's just another table; fill it however you see fit.
The trick is: once you have your table filled and you want to switch it in against another table, you run this:
TRUNCATE TABLE targettable;
ALTER TABLE sourcetable SWITCH TO targettable;
You can also add WAIT_AT_LOW_PRIORITY, though I've never tested it - I only found it today, here: https://littlekendra.com/2017/01/19/why-you-should-switch-in-staging-tables-instead-of-renaming/
TRUNCATE TABLE targettable;
ALTER TABLE sourcetable SWITCH TO targettable
WITH ( WAIT_AT_LOW_PRIORITY
(MAX_DURATION = 1 MINUTES, ABORT_AFTER_WAIT = BLOCKERS)
);
This replaces targettable with the data in sourcetable
As always, someone has done all this before you and me and blogged it, and it's only a google away
https://sqlsunday.com/2014/08/24/reloading-fact-tables-with-zero-downtime/
The basic idea... It's just a simple means of replacing an empty table with a populated table, without having to drop the empty table and rename the populated table. The only caveat is that both tables MUST have the exact same structure, including any and all indexes.
So, say you have 10 million rows of data and you want to delete 9 million of those rows. Deleting 9 million rows in one pop is likely to blow up your tempdb and your transaction logs. As an alternative, you can do a "SELECT INTO" to put the 1 million rows you want to keep into a new table (minimally logged)... add indexes that match the original table... truncate the original table (minimally logged) and then switch the partitions (minimally logged).
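A rough sketch of that workflow; dbo.FactTable, dbo.FactTable_Staging, the Id key and the event id filter are illustrative stand-ins, and the staging table's indexes must mirror the real table's exactly:
-- Copy only the rows we want to keep (minimally logged)
SELECT *
INTO dbo.FactTable_Staging
FROM dbo.FactTable
WHERE [event id] <> 6030;

-- Recreate the same indexes/constraints the original table has, e.g. its clustered primary key
ALTER TABLE dbo.FactTable_Staging
    ADD CONSTRAINT PK_FactTable_Staging PRIMARY KEY CLUSTERED (Id);

-- Empty the original table (minimally logged)
TRUNCATE TABLE dbo.FactTable;

-- Switch the populated staging table in (a metadata-only operation)
ALTER TABLE dbo.FactTable_Staging SWITCH TO dbo.FactTable;

-- The staging table is now empty and can be dropped
DROP TABLE dbo.FactTable_Staging;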

Adding Columns - SQL Server Tables

I have been asked to look into a manual process that one of my colleagues is completing every now and again.
He sometimes needs to add a new column onto a large table (200 million rows), it is taking him more than 1 hour to do this. Before you ask, yes, the columns are nullable but sometimes the new column will have 90% data in it.
Instead of adding a new column to the existing table, he...
Creates a new table
Selects * from the old table (inserting into the new one)
Adds the new column as part of his script
Then he deletes the old table, renames the new table back to the original name, adds the indexes and then compresses. He says it's much quicker like that.
If this is the best way then I will try to write an SSIS package to make the process more seamless.
Any advice is welcome!
Thanks
Creating a new table structure, moving all the data to that table and deleting the prior table is a fine approach for a small amount of data, and you can do it with the wizard in SQL Server. But it is the worst way of solving this problem with millions of rows.
For a large amount of data (millions of records) you should use ALTER TABLE:
Alter Table MyTable
ADD NewColumn nvarchar(10) null
The new column will be added to the table as the last column.
If you use this script it takes less than a second, because no data is moved; you are just adding a new column to the table.
But if you use the wizard method you mentioned, with millions of records it takes hours.
As Ali says:
alter Table MyTable
ADD NewColumn nvarchar(10) null
But then you need to fill in the 90% of data. Since he already has a table with the new values in it and the key he's joining on in the copy, this is all he needs:
UPDATE a
SET a.[NewColumn] = b.[NewColumn]
FROM MyTable a
INNER JOIN NewColumnTable b ON a.[KeyField] = b.[KeyField]
That would be a lot quicker. You could do it in SSIS, but even if this happens a lot it's not really worth it for a few lines of SQL.
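On a 200-million-row table you might also want to run that update in batches to keep the transaction log manageable. A minimal sketch, reusing the same (hypothetical) MyTable, NewColumnTable and KeyField names:
-- Update in chunks so each transaction, and its log reservation, stays small
-- (assumes NewColumnTable.[NewColumn] is never NULL; otherwise track progress by key range)
DECLARE @rows INT = 1;
WHILE @rows > 0
BEGIN
    UPDATE TOP (10000) a
    SET a.[NewColumn] = b.[NewColumn]
    FROM MyTable a
    INNER JOIN NewColumnTable b ON a.[KeyField] = b.[KeyField]
    WHERE a.[NewColumn] IS NULL;   -- only touch rows not yet filled

    SET @rows = @@ROWCOUNT;
END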

How to delete all data then insert new data

I have a process that runs every 60 minutes. On one table I need to remove all data then insert records from a different table. The problem is it takes a long time to delete and reinsert the data. When the table has no data I am afraid the users will see this. Is there a way to refresh the data without users seeing this?
If you want to remove all data from the table then use TRUNCATE TABLE instead of DELETE - it'll do it faster.
As for the insert it is a bit hard to say because you did not give any details but what you can try is:
Option 1 - Using temp table
-- create an empty copy of the original table
SELECT * INTO table_temp FROM original_table WHERE 1 = 0;
-- load the new data into table_temp here
DROP TABLE original_table;
EXEC sp_rename 'table_temp', 'original_table';
Option 2 - Use 2 tables "Active-Passive" -
Have 2 tables for the data and a view to select over them. The view will join with a third table that specifies which of the tables to select from - kind of an "active-passive" concept.
To demonstrate the concept:
with active_table as ( select 'table1_active' active_table )
select 1 data
where 'table1_active' in (select * from active_table)
union all
select 2
where 'table2_active' in (select * from active_table)
-- This returns only one record, with the "1"
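A minimal sketch of how that might look with real tables; the names data_a, data_b, active_config, data_view and source_table are made up for illustration:
-- Two identical data tables and a config table that says which one is live
CREATE TABLE data_a (id INT, payload NVARCHAR(100));
CREATE TABLE data_b (id INT, payload NVARCHAR(100));
CREATE TABLE active_config (active_table VARCHAR(10));   -- holds 'A' or 'B'
INSERT INTO active_config VALUES ('A');
GO
CREATE VIEW data_view AS
SELECT d.id, d.payload FROM data_a d
WHERE 'A' IN (SELECT active_table FROM active_config)
UNION ALL
SELECT d.id, d.payload FROM data_b d
WHERE 'B' IN (SELECT active_table FROM active_config);
GO
-- Refresh cycle: load the passive table, then flip the switch
TRUNCATE TABLE data_b;
INSERT INTO data_b SELECT id, payload FROM source_table;   -- source_table is hypothetical
UPDATE active_config SET active_table = 'B';               -- readers now see data_b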
Are you truncating instead of deleting? A truncate (while logged) is much, much faster than a delete.
If you cannot truncate, try deleting 1,000-10,000 rows at a time (smaller log build-up and, when deleting large numbers of rows, a great increase in speed), as in the sketch below.
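A minimal sketch of that batched delete, assuming a hypothetical dbo.StagingData table; tune the batch size to what your log can comfortably absorb:
-- Delete in small batches so the transaction log stays manageable
DECLARE @deleted INT = 1;
WHILE @deleted > 0
BEGIN
    DELETE TOP (5000) FROM dbo.StagingData;
    SET @deleted = @@ROWCOUNT;
END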
If you really want fast performance you can create a second table, fill it with data, and then drop the first table and rename the second table as the first table. You will lose all the permissions on the table when you do this so be sure to reapply the permissions to the renamed table.
If you are deleting all rows in a table, you can consider using a TRUNCATE statement against the table instead of a DELETE. It will speed up part of your process. Keep in mind that this will reset any identity seeds you may have on the table.
As suggested, you can wrap this process in a transaction and depending on how you set your transaction isolation level, you can control what your users will see if they query the data during the transaction.
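For example, a minimal sketch of the transactional approach, with hypothetical dbo.TargetTable / dbo.SourceTable names; under the default READ COMMITTED isolation level, readers are blocked until the commit rather than seeing an empty table:
BEGIN TRANSACTION;
    TRUNCATE TABLE dbo.TargetTable;   -- TRUNCATE is fully transactional in SQL Server
    INSERT INTO dbo.TargetTable
    SELECT * FROM dbo.SourceTable;
COMMIT TRANSACTION;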
Make it sequence based: your copied-in records all have a series number (the same for all records copied in as one batch), another table holds which series is active, and you always select with a join to that table. When you copy in new records they get a new series number that is not yet active; once they are all copied in, the series table is updated to the new series. The redundant series' records can be deleted at your leisure.
Example
Let's suppose your table has field SeriesNo added and table ActiveSeries has field SeriesNo.
All queries of your table:
SELECT *
FROM YourTable Y
JOIN ActiveSeries A
ON A.SeriesNo = Y.SeriesNo
Then updating SeriesNo in ActiveSeries makes the new series of records available instantly.
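A minimal sketch of the refresh cycle under that scheme; SourceTable, Col1 and Col2 are hypothetical stand-ins for your real source and columns:
-- Load the new batch with a series number that is not yet active
DECLARE @newSeries INT = (SELECT ISNULL(MAX(SeriesNo), 0) + 1 FROM YourTable);

INSERT INTO YourTable (SeriesNo, Col1, Col2)
SELECT @newSeries, Col1, Col2
FROM SourceTable;

-- Flip the switch: readers joining to ActiveSeries see the new series instantly
UPDATE ActiveSeries SET SeriesNo = @newSeries;

-- Clean up the old series whenever convenient
DELETE FROM YourTable WHERE SeriesNo < @newSeries;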
I would follow the approach below while I troubleshoot why the delete and reinsert is taking so long.
Create a new table (t1) which has the same data as the old table (maintable).
Now do your stuff on t1.
When your stuff is done, rename t1 to maintable.

Huge Data Base in oracle

I have about 20,000,000 records in a table (random data), and I added an empty column to that table. But when I update the table to fill that column, the process breaks down.
I tried using a cursor and an index, but with no results.
Do you have any fast solution, or any alternative solution?
Thank you in advance :)
Maybe the fastest way would be to create new_table as select * from the existing table, and inside the SELECT statement of the CTAS calculate the value of the new column. After that, you can rename the old table to something like table_bckp, then rename the new table to the original table name, and then apply the constraints, indexes and other scripts previously saved from the old table's definition.
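A minimal sketch of that CTAS-and-rename approach in Oracle; big_table, some_col and new_col are hypothetical, and the calculation is just a placeholder:
-- Build the new table and compute the new column in the same pass
CREATE TABLE big_table_new NOLOGGING AS
SELECT t.*, UPPER(t.some_col) AS new_col   -- replace with your real calculation
FROM big_table t;

-- Swap the tables
ALTER TABLE big_table RENAME TO big_table_bckp;
ALTER TABLE big_table_new RENAME TO big_table;

-- Re-apply constraints, indexes, grants and triggers saved from the old table's definition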

Importing Data From One database to another

I wanted to know: suppose I have a table in one database with, say, 1000 records, and a similar table in another database with, say, 500 records.
Now my question is: if I try to import data from, say, DB1.Tbl1 to DB2.Tbl1, what will happen? Is there any possibility of duplication of the data?
I want the records of DB1.Tbl1 copied into the DB2.Tbl1 table. Please clear up my confusion.
If you have the same data in both tables, you can first truncate the 2nd table, and after that import the data from the 1st table with an INSERT command or the "Import Data" task.
Try this:
INSERT INTO DB2.dbo.Tbl1 SELECT * FROM DB1.dbo.Tbl1
This just copies the data. If you want to move the table definition as well, you have to do something else.
Note that SQL Server Management Studio's "Import Data" task (right-click on the DB name, then Tasks) will do most of this for you. Run it from the database you want to copy the data into.
If the tables don't exist it will create them for you, but you'll probably have to recreate any indexes. If the tables do exist, it will append the new data by default but you can adjust that (edit mappings) so it will delete all existing data.
Try this:
We can copy all columns from one table to another, existing table:
INSERT INTO table2
SELECT * FROM table1;
Or we can copy only the columns we want to into another, existing table:
INSERT INTO table2
(column_name(s))
SELECT column_name(s)
FROM table1;
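If the worry is duplicating the 500 rows that already exist in the target, one option is to insert only the rows that aren't there yet. A minimal sketch, assuming a hypothetical Id key column (plus Col1, Col2) shared by both tables:
-- Copy only rows whose key is not already present in the target table
INSERT INTO DB2.dbo.Tbl1 (Id, Col1, Col2)
SELECT s.Id, s.Col1, s.Col2
FROM DB1.dbo.Tbl1 AS s
WHERE NOT EXISTS (
    SELECT 1
    FROM DB2.dbo.Tbl1 AS t
    WHERE t.Id = s.Id
);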