Efficient way to change a table's filegroup - SQL

I have around 300 tables spread across different partitions, and these tables no longer hold the huge amount of data they once did. I now run into space issues from time to time, and some small but valuable space is occupied by the 150 filegroups that were created for these tables. So I want to move the tables onto a single filegroup instead of 150 FGs and release the space by deleting those filegroups.
FYI: these tables are not holding any data now, but they have many constraints and indexes defined.
Can you please suggest how this can be done efficiently?

To move a table, drop and then re-create its clustered index, specifying the new FG. If it does not have a clustered index, create one on the new filegroup and then drop it.
It is best practice not to keep user data on the PRIMARY filegroup: leave that for system objects and put your data on other filegroups. But a lot of people ignore this...
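For example, this can be done in one statement per table with DROP_EXISTING (the index, table, and filegroup names below are just placeholders, not from the original question):
-- Rebuild an existing clustered index straight onto the target filegroup.
-- If the index backs a PK, keep the key definition exactly the same.
CREATE UNIQUE CLUSTERED INDEX PK_MyTable
    ON dbo.MyTable (Id)
    WITH (DROP_EXISTING = ON)
    ON [NEW_FG];

-- For a heap: create a throwaway clustered index on the new filegroup,
-- then drop it; the data pages remain on the new filegroup.
CREATE CLUSTERED INDEX IX_MyHeap_Move ON dbo.MyHeap (Id) ON [NEW_FG];
DROP INDEX IX_MyHeap_Move ON dbo.MyHeap;
Note that nonclustered indexes stay on their original filegroup and need to be rebuilt onto the new one separately.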

I found some more information on ways of changing the filegroup of an existing table:
1- Define a clustered index on every table using NEW_FG (as mentioned in the answer above)
CREATE UNIQUE CLUSTERED INDEX <INDEX_NAME> ON dbo.<TABLE_NAME>(<COLUMN_NAME>) ON [FG_NAME]
2- If we can't define a clustered index, copy the table structure and data into a new table, drop the old one, and rename the new one back to the old name, as below.
First change the database's default FG to NEW_FG so that every table created with SELECT ... INTO goes to the new FG by default:
ALTER DATABASE <DATABASE> MODIFY FILEGROUP [FG_NAME] DEFAULT
IF OBJECT_ID('table1') IS NOT NULL
BEGIN
    SELECT * INTO table1_bkp FROM table1
    DROP TABLE table1
    EXEC sp_rename 'table1_bkp', 'table1'
END
After all the operations, set the database's default FG back to what it was before:
ALTER DATABASE <DATABASE> MODIFY FILEGROUP [PRIMARY] DEFAULT
3- Drop the table, if feasible, and then create it again on NEW_FG:
DROP TABLE table1
CREATE TABLE [table1] (
id int,
name nvarchar(50),
--------
) ON [NEW_FG]
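Since roughly 300 tables are involved, it may help to first list which indexes currently sit on which filegroups, so the moves can be scripted; a rough sketch using the catalog views (nothing here is specific to the original schema):
-- List every table/index and the filegroup (or partition scheme) it is allocated on,
-- excluding PRIMARY, so the old filegroups can be worked through one by one.
SELECT  t.name  AS table_name,
        i.name  AS index_name,
        i.type_desc,
        ds.name AS filegroup_name
FROM    sys.tables t
JOIN    sys.indexes i      ON i.object_id = t.object_id
JOIN    sys.data_spaces ds ON ds.data_space_id = i.data_space_id
WHERE   ds.name <> N'PRIMARY'
ORDER BY ds.name, t.name, i.index_id;
Once a filegroup no longer holds any allocations, its files can be removed with ALTER DATABASE ... REMOVE FILE and the empty filegroup itself with ALTER DATABASE ... REMOVE FILEGROUP.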

Related

Oracle SQL primary key stuck

I ran into a curious problem. I am creating copies of currently existing tables and adding partitions to them.
The process is as follows:
1. Rename the current constraints (I can't drop them without dropping the table itself, because I will need the data later).
2. Create a new partitioned table that structurally copies the current one, so I have MYTABLE (original) and PART_TABLE (new partitioned), including FKs.
3. Copy the data with an INSERT INTO ... SELECT clause.
4. Alter the table to add indexes and PKs.
5. Rename the tables so I end up with MYTABLE (new partitioned) and TRASH_TABLE (original).
In step 4, unfortunately, I got an error:
ALTER TABLE MYTABLE ADD CONSTRAINT "PK_MYTABLE"
PRIMARY KEY ("MY_ID", "SEQUENCE")
USING INDEX LOCAL TABLESPACE INDEXSPACE;
SQL Error: ORA-00955: "name is already used by an existing object"
Now, I logically assumed that I had simply forgotten to rename the PK, so I checked TRASH_TABLE, but I see the correctly renamed PK there.
I also ran
SELECT *
FROM ALL_CONSTRAINTS
WHERE CONSTRAINT_NAME LIKE 'PK_MYTABLE'
and it returned 0 results. The same goes for the USER_CONSTRAINTS view.
Renamed PKs are displaying correctly.
Another thing I noticed is that only PKs are locked this way. Adding FKs or UNIQUE constraints works just fine.
As stated in How to rename a primary key in Oracle such that it can be reused, the problem is that Oracle creates an index for the primary key. You need to rename the autogenerated index as well. As suggested by Tony Andrews, try
ALTER INDEX "PK_MYTABLE" RENAME TO "PK_MYTABLE_OLD";
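If you want to confirm that it is the leftover index that still owns the name, a quick check against USER_OBJECTS should show an object of type INDEX called PK_MYTABLE even though no constraint has that name:
-- The constraint is gone, but the autogenerated index still holds the old name.
SELECT object_name, object_type
FROM   user_objects
WHERE  object_name = 'PK_MYTABLE';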

Why is my mdf file from the new filegroup not growing?

I create a draft database with two tables, dbo.D and dbo.F. Next, I create a new filegroup for dbo.F and a file for it:
USE DEV
ALTER DATABASE DEV
ADD FILEGROUP [BLOB]
ALTER DATABASE DEV
ADD FILE
(
NAME= 'blob',
FILENAME = 'D:\MS SQL\DB\blob.mdf'
)
TO FILEGROUP [BLOB]
Next, I drop the clustered index and recreate it, specifying the filegroup name:
ALTER TABLE F
DROP CONSTRAINT [F_PK] WITH (MOVE TO BLOB)
ALTER TABLE F
ADD CONSTRAINT [F_PK] PRIMARY KEY CLUSTERED
(
ID
)
WITH (IGNORE_DUP_KEY = OFF) ON BLOB
CREATE UNIQUE CLUSTERED INDEX F_PK
ON dbo.F(ID)
WITH DROP_EXISTING
ON [BLOB]
Next, I run more than 2k INSERT queries and fill dbo.F with random binary data.
Question!
Why does my new filegroup's file (shown in the picture) weigh so little compared to the default filegroup's file?
Without seeing the full schema of your tables: you only have ID in your clustered index, which means that all of the data you inserted is still in your primary filegroup. The only thing in blob is the index of your ID values, which I would assume is nowhere near as large as the binary data you are inserting. I'm basing my assumption on ID being an INT column...
This, of course, is irrelevant if ID is the column in which you are storing your binary data, but I assume that's not the case if you're using it as a PK and clustered index.
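If you want to see exactly where the data for dbo.F went, you can check which filegroup each allocation unit lives on; something along these lines (standard catalog views, nothing assumed beyond the table name from the question):
SELECT  i.name       AS index_name,
        au.type_desc AS allocation_type,   -- IN_ROW_DATA / LOB_DATA / ROW_OVERFLOW_DATA
        ds.name      AS filegroup_name,
        au.total_pages
FROM    sys.indexes i
JOIN    sys.partitions p
        ON p.object_id = i.object_id AND p.index_id = i.index_id
JOIN    sys.allocation_units au
        ON (au.type IN (1, 3) AND au.container_id = p.hobt_id)
        OR (au.type = 2       AND au.container_id = p.partition_id)
JOIN    sys.data_spaces ds
        ON ds.data_space_id = au.data_space_id
WHERE   i.object_id = OBJECT_ID(N'dbo.F');
The type_desc column also shows whether values landed in IN_ROW_DATA or LOB_DATA allocation units, which can sit on different filegroups.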

Need to replace one table with another (same structure, but different data) in SQL Server 2014

I made a copy of my table. After that I ran some commands on my base table (not on the copy), like INSERT/DELETE/UPDATE, and now I have a problem when I want to replace my table with the copy. SELECT INTO gives me an error. When I try to drop the table and recreate it from the copy, I get an error saying I can't delete a table with foreign keys. I don't have any other ideas, can somebody help me? :)
You can't drop a table that is referenced by foreign keys, but you can (see the sketch below):
1. Drop the constraints.
2. Create a temporary table.
3. Copy your data into the temporary table.
4. Truncate the initial table.
5. Recreate the constraints on the initial table.
6. Copy your temporary table into your initial table.
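A minimal sketch of those steps, with made-up object names (dbo.Base, dbo.Base_copy, dbo.Child, FK_Child_Base); in this sketch the constraint is re-added only after the data is back, so the foreign key check passes:
-- Drop the FK(s) that reference the table being replaced
ALTER TABLE dbo.Child DROP CONSTRAINT FK_Child_Base;

-- Stage the replacement data in a temporary table
SELECT * INTO dbo.Base_stage FROM dbo.Base_copy;

-- Empty the original table (allowed now that nothing references it)
TRUNCATE TABLE dbo.Base;

-- Copy the staged data into the original table
-- (add SET IDENTITY_INSERT and an explicit column list if Base has an identity column)
INSERT INTO dbo.Base SELECT * FROM dbo.Base_stage;

-- Recreate the FK after the reload so the existing child rows validate
ALTER TABLE dbo.Child
    ADD CONSTRAINT FK_Child_Base FOREIGN KEY (BaseId) REFERENCES dbo.Base (Id);

DROP TABLE dbo.Base_stage;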

Remove identity or formula without dropping and recreating the table

What I want is to set or remove the identity or formula on a column of a table in SQL. SQL Server generates this code:
CREATE TABLE dbo.Tmp_Table
(
a int NOT NULL IDENTITY (1, 1)
) ON [PRIMARY]
GO
ALTER TABLE dbo.Tmp_Table SET (LOCK_ESCALATION = TABLE)
GO
SET IDENTITY_INSERT dbo.Tmp_Table ON
GO
IF EXISTS(SELECT * FROM dbo.[Table])
EXEC('INSERT INTO dbo.Tmp_Table (a)
SELECT a FROM dbo.[Table] WITH (HOLDLOCK TABLOCKX)')
GO
SET IDENTITY_INSERT dbo.Tmp_Table OFF
GO
DROP TABLE dbo.[Table]
GO
EXECUTE sp_rename N'dbo.Tmp_Table', N'Table', 'OBJECT'
I want to write a script to set or remove an identity or formula without dropping the table and recreating it, because I don't have all of the column's information, such as primary keys or foreign keys.
(I want it to work like setting NULL or NOT NULL, where you just give a column name and a column type.)
How can I do this? Is it even possible?
Short answer is that you can't do it easily. If you want to drop an identity specification, you have to drop and re-create the column (which is less invasive than dropping the table, but still a pain). To keep the data you would need to create a copy of the column without the identity specification and then copy the data across before dropping the original column and renaming the new one.
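To make that concrete, here is a minimal sketch of the column swap for the column a in the generated script above, assuming for the moment that nothing else (keys, indexes, defaults) depends on a:
-- Add a replacement column with the same type but no IDENTITY property
ALTER TABLE dbo.[Table] ADD a_new int NULL;
GO
-- Copy the existing values across
UPDATE dbo.[Table] SET a_new = a;
GO
-- Match the original column's nullability
ALTER TABLE dbo.[Table] ALTER COLUMN a_new int NOT NULL;
GO
-- Drop the identity column and take over its name
ALTER TABLE dbo.[Table] DROP COLUMN a;
GO
EXEC sp_rename N'dbo.[Table].a_new', N'a', N'COLUMN';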
The problem with that is if you have a foreign key referencing that column then you need to drop that first and re-create it afterwards. Luckily you can get all the information about a column by querying the system catalog views. The ones you'd be interested in are:
sys.columns
sys.foreign_keys
sys.foreign_key_columns
You might also want sys.indexes (within that, check is_primary_key = 1 to get information on the primary key), and to get the PK columns you'll need to look at sys.index_columns. I won't detail how to join them all (they are all linked), as you can find that in the MSDN documentation.
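For example, a couple of quick checks against those views (dbo.MyTable is a placeholder name):
-- Columns on the table that carry an identity specification
SELECT c.name, c.is_identity
FROM   sys.columns c
WHERE  c.object_id = OBJECT_ID(N'dbo.MyTable')
  AND  c.is_identity = 1;

-- Foreign keys that reference the table, and through which columns
SELECT fk.name AS fk_name,
       OBJECT_NAME(fkc.parent_object_id) AS referencing_table,
       COL_NAME(fkc.parent_object_id, fkc.parent_column_id) AS referencing_column
FROM   sys.foreign_keys fk
JOIN   sys.foreign_key_columns fkc ON fkc.constraint_object_id = fk.object_id
WHERE  fkc.referenced_object_id = OBJECT_ID(N'dbo.MyTable');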
It's a proper pain in the neck to get rid of an identity specification, I feel your pain - I've been there myself.

Partition scheme change with clustered index

I have a table with 600 million records which is partitioned on the PS_TRPdate(TRPDate) partition scheme, and I want to change it to another partition scheme, PS_LPDate(LPDate).
So I have tried the following steps with a small amount of data:
1) Drop the primary key constraint.
2) Add a new primary key clustered index using the new partition scheme PS_LPDate(LPDate).
Is this feasible with 600 million records? Can anyone guide me through it?
And how does it work with non-partitioned tables?
My gut feeling is that you should create a parallel table using a new primary key, filegroups and files.
To test out my assumption, I looked at an old blog post in which I stored the first five million prime numbers into three files / filegroups.
I used the TSQL view that Kalen Delaney wrote, which I modified to my standards, to look at the partition information.
It showed three partitions based on the primary key.
Next, I drop the primary key on the my_value column, create a new column named chg_value, update it to the prime number, and then try to create a new primary key.
-- drop primary key (pk)
alter table tbl_primes drop constraint [PK_TBL_PRIMES]
-- add new field for new pk
alter table tbl_primes add chg_value bigint not null default (0)
-- update new field
update tbl_primes set chg_value = my_value
-- try to add a new primary key
alter table tbl_primes add constraint [PK_TBL_PRIMES] primary key (chg_value)
First, I was surprised that the partitions still stayed together after dropping the PK. However, the view shows the index no longer exists.
Second, I ended up receiving an error during constraint creation.
While you could merge/switch the partitions into one filegroup which is not part of the scheme, drop/create the primary key, partition function and partition scheme, and then move the data yet again with the appropriate merge/switch statements, I would not.
This would generate a ton of work (TSQL) and cause a lot of I/O on the disks.
I suggest you build a parallel partitioned table, if you have the space, with the new primary key. Reload the data from the old table into the new one.
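A rough sketch of that parallel table, assuming the new partition scheme PS_LPDate already exists and using made-up column names (the partitioning column must be part of the clustered key):
-- New table created directly on the new partition scheme
CREATE TABLE dbo.MyTable_new
(
    ID      bigint NOT NULL,
    TRPDate date   NOT NULL,
    LPDate  date   NOT NULL,
    -- LPDate is in the key because it is the partitioning column
    CONSTRAINT PK_MyTable_new PRIMARY KEY CLUSTERED (ID, LPDate)
) ON PS_LPDate (LPDate);

-- Reload from the old table, ideally in batches on a 600-million-row table
INSERT INTO dbo.MyTable_new WITH (TABLOCK) (ID, TRPDate, LPDate)
SELECT ID, TRPDate, LPDate
FROM   dbo.MyTable;

-- Swap the names once the reload has been verified
EXEC sp_rename 'dbo.MyTable', 'MyTable_old';
EXEC sp_rename 'dbo.MyTable_new', 'MyTable';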
If you are not using data compression and have the Enterprise edition of SQL Server, why not save the bytes by turning it on?
Good luck!
John
www.craftydba.com