SQL Primary Key Exception

I have a Microsoft SQL Server database made up of about 8 related tables that I am trying to update. To do this I create a number of temporary tables:
"CREATE TABLE [vehicle_data].[dbo].[temp_MAINTENANCE_EVENT] (" +
"[maintenance_event_id] int," +
"[maintenance_computer_code_id] int," +
"[veh_eng_maintenance_id] int," +
"CONSTRAINT [PK_maintenance_event_id"] PRIMARY KEY CLUSTERED ([maintenance_event_id] ASC))";
Then after all the temporary tables have been created I drop the existing tables, rename the temporary tables, and add foreign keys and indexing to the new tables to speed up joins and querying.
The issue I'm having is that the original primary key names remain in the database, so when I go to update again I get:
Exception: There is already an object named 'PK_maintenance_event_id'
in the database. Could not create constraint.
I'm wondering what the best course of action is. Should I not set the primary key when I create the temporary table and instead add it after the table has been renamed? Or is there a way to rename constraints, so that when I rename the table I can also change the name of the primary key constraint?
After the original tables have been dropped I want there to be as little downtime as possible, but anything that happens before the tables have been dropped can take a really long time and it won't matter.

If your temp tables need that constraint, then when you create them use
CONSTRAINT [PK_maintenance_event_id_temp]
instead of
CONSTRAINT [PK_maintenance_event_id]
and when you rename the temp table back to the real table, rename the constraint as well:
EXEC sp_rename 'PK_maintenance_event_id_temp', 'PK_maintenance_event_id';
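A minimal sketch of the whole cycle, assuming the live table is [dbo].[MAINTENANCE_EVENT] and that everything runs in the vehicle_data database:
-- build the staging table under a temporary constraint name
CREATE TABLE [dbo].[temp_MAINTENANCE_EVENT] (
    [maintenance_event_id]         int,
    [maintenance_computer_code_id] int,
    [veh_eng_maintenance_id]       int,
    CONSTRAINT [PK_maintenance_event_id_temp]
        PRIMARY KEY CLUSTERED ([maintenance_event_id] ASC)
);
-- ...load temp_MAINTENANCE_EVENT, then swap it in:
DROP TABLE [dbo].[MAINTENANCE_EVENT];
EXEC sp_rename 'dbo.temp_MAINTENANCE_EVENT', 'MAINTENANCE_EVENT';
EXEC sp_rename 'dbo.PK_maintenance_event_id_temp', 'PK_maintenance_event_id', 'OBJECT';
Only the DROP and the two sp_rename calls have to run inside the downtime window, so the slow table build stays out of the critical path.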

Related

Oracle SQL primary key stuck

I ran into a curious problem. I am creating copies of currently existing tables and adding partitions to them.
The process is as follows:
1) Rename the current constraints (I can't drop them without dropping the table itself because I will need the data later).
2) Create a new partitioned table that structurally copies the current one (including FKs), so I have MYTABLE (original) and PART_TABLE (new partitioned).
3) Copy the data with an INSERT INTO ... SELECT.
4) Alter the new table to add indexes and PKs.
5) Rename the tables so I end up with MYTABLE (new partitioned) and TRASH_TABLE (original).
Unfortunately, in step 4 I got an error:
ALTER TABLE MYTABLE ADD CONSTRAINT "PK_MYTABLE"
PRIMARY KEY ("MY_ID", "SEQUENCE")
USING INDEX LOCAL TABLESPACE INDEXSPACE;
SQL Error: ORA-00955: "name is already used by an existing object"
Now, I logically assumed that I had simply forgotten to rename the PK, so I checked TRASH_TABLE, but the PK there is correctly renamed.
I also ran
SELECT *
FROM ALL_CONSTRAINTS
WHERE CONSTRAINT_NAME LIKE 'PK_MYTABLE'
and it returned 0 results. Same with table USER_CONSTRAINTS.
Renamed PKs are displaying correctly.
Another thing I noticed is that only PKs are locked this way. Adding FKs or UNIQUE constraints works just fine.
As stated in How to rename a primary key in Oracle such that it can be reused, the problem is that Oracle creates an index for the primary key. You need to rename the autogenerated index as well. As suggested by Tony Andrews, try
ALTER INDEX "PK_MYTABLE" RENAME TO "PK_MYTABLE_OLD";
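A quick way to see what is still holding the name is to query ALL_OBJECTS rather than ALL_CONSTRAINTS; the autogenerated index shows up there with OBJECT_TYPE = 'INDEX':
SELECT OWNER, OBJECT_NAME, OBJECT_TYPE
FROM ALL_OBJECTS
WHERE OBJECT_NAME = 'PK_MYTABLE';
Once the index has been renamed, the ALTER TABLE ... ADD CONSTRAINT from step 4 should succeed.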

Partition scheme change with clustered Index

I have a table with 600 million records which is currently partitioned on PS_TRPdate(TRPDate), and I want to change it to another partition scheme, PS_LPDate(LPDate).
So I tried it with a small amount of data, using the following steps:
1) Drop the primary key constraint.
2) Add the new primary key clustered index on the new partition scheme PS_LPDate(LPDate), as sketched below.
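In T-SQL terms those two steps look roughly like this (table, column, and constraint names are hypothetical; note the new clustered key has to include the LPDate partitioning column):
-- 1) drop the existing primary key
ALTER TABLE dbo.MyTable DROP CONSTRAINT PK_MyTable;
-- 2) recreate it as a clustered key on the new partition scheme
ALTER TABLE dbo.MyTable ADD CONSTRAINT PK_MyTable
    PRIMARY KEY CLUSTERED (Id, LPDate)
    ON PS_LPDate (LPDate);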
Is this feasible with 600 million records? Can anyone guide me through it?
And how does it work with non-partitioned tables?
My gut feeling is that you should create a parallel table using a new primary key, file groups and files.
To test out my assumption, I looked at an old blog post in which I stored the first five million prime numbers into three files / file groups.
I used the TSQL view that Kalen Delaney wrote, and which I modified to my standards, to look at the partition information.
As you can see, we have three partitions based on the primary key.
Next, I drop the primary key on the my_value column, create a new column named chg_value, update it to the prime number, and then try to create a new primary key.
-- drop primary key (pk)
alter table tbl_primes drop constraint [PK_TBL_PRIMES]
-- add new field for new pk
alter table tbl_primes add chg_value bigint not null default (0)
-- update new field
update tbl_primes set chg_value = my_value
-- try to add a new primary key
alter table tbl_primes add constraint [PK_TBL_PRIMES] primary key (chg_value)
First, I was surprised that the partitioning stayed intact after dropping the PK. However, the view shows the index no longer exists.
Second, I ended up receiving an error during constraint creation.
While you could merge/switch the partitions into one file group which is not part of the scheme, drop/create the primary key, partition function & partition scheme, and then move the data yet again with the appropriate merge/switch statements, I would not.
This will generate a ton of work (TSQL) and cause a lot of I/O on the disks.
I suggest you build a parallel partitioned table, if you have space, with the new primary key. Reload the data from the old table to the new.
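A rough sketch of that approach, with hypothetical table and column names and assuming the PS_LPDate scheme from the question already exists:
-- new table with its clustered primary key aligned to the new partition scheme
-- (the partitioning column must be part of the unique clustered key)
CREATE TABLE dbo.tbl_new
(
    trp_id INT      NOT NULL,
    LPDate DATETIME NOT NULL,
    CONSTRAINT PK_tbl_new PRIMARY KEY CLUSTERED (trp_id, LPDate)
) ON PS_LPDate (LPDate);
-- reload from the old table; batching the insert keeps the transaction log manageable
INSERT INTO dbo.tbl_new (trp_id, LPDate)
SELECT trp_id, LPDate
FROM dbo.tbl_old;
Once the copy is verified, you can drop the old table and rename the new one into place.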
If you are not using data compression and have the Enterprise edition of SQL Server, why not save the bytes by turning it on?
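For example, page compression could be switched on while the hypothetical new table is rebuilt (SQL Server 2008+ Enterprise):
ALTER TABLE dbo.tbl_new
REBUILD PARTITION = ALL
WITH (DATA_COMPRESSION = PAGE);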
Good luck!
John
www.craftydba.com

H2 database: Information about primary key in INFORMATION_SCHEMA

I create the following table in H2:
CREATE TABLE TEST
(ID BIGINT NOT NULL PRIMARY KEY)
Then I look into INFORMATION_SCHEMA.TABLES table:
SELECT SQL
FROM INFORMATION_SCHEMA.TABLES
WHERE TABLE_NAME = 'TEST'
Result:
CREATE CACHED TABLE TEST(
ID BIGINT NOT NULL
)
Then I look into INFORMATION_SCHEMA.CONSTRAINTS table:
SELECT SQL
FROM INFORMATION_SCHEMA.CONSTRAINTS
WHERE TABLE_NAME = 'TEST'
Result:
ALTER TABLE TEST
ADD CONSTRAINT CONSTRAINT_4C
PRIMARY KEY(ID)
INDEX PRIMARY_KEY_4C
These statements are not the ones I executed, so the question is:
Does the information in TABLES and CONSTRAINTS reflect the actual SQL that was executed against the database?
In the original CREATE TABLE statement there was no CACHED keyword (not a problem), and I never executed an ALTER TABLE .. ADD CONSTRAINT statement.
The actual reason I am asking is that I am not sure which statement I should execute in order to guarantee that the primary key is used in a clustered index.
If you look at my previous question H2 database: clustered index support, you may find the following statement in Thomas Mueller's answer:
If a primary key is created after the table has been created then the primary key is stored in a new index b-tree.
Therefore, if the statements are executed exactly as they are shown in INFORMATION_SCHEMA, then the primary key is created after the table is created, and hence ID is not used in a clustered index (basically as the key of the data b-tree).
Is there a way to guarantee that the primary key is used in a clustered index in H2?
Does the information in TABLES and CONSTRAINTS reflect the actual SQL that was executed against the database?
Yes. Basically, those are the statements that are run when opening the database.
If you look at my previous question
The answer "If a primary key is created after the table has been created..." was incorrect, I fixed it now to "If a primary key is created after data has been inserted...".
Is there a way to guarantee that the primary key is used as a clustered index in H2?
This is now better described in the H2 documentation at "How Data is Stored Internally": "If a single column primary key of type BIGINT, INT, SMALLINT, TINYINT is specified when creating the table (or just after creating the table, but before inserting any rows), then this column is used as the key of the data b-tree."
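Based on that, a minimal sketch that keeps the key in the data b-tree is simply to declare it in the CREATE TABLE itself:
CREATE TABLE TEST
(ID BIGINT NOT NULL PRIMARY KEY)  -- single-column BIGINT key, declared before any rows exist
Per the quoted documentation, adding the primary key with ALTER TABLE immediately after CREATE TABLE also works, as long as no rows have been inserted yet.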

Can you have a Foreign Key onto a View of a Linked Server table in SQLServer 2k5?

I have a SQL Server instance with a linked server to another database somewhere else. I have created a view over a table on that linked server:
create view vw_foo as
select
[id],
[name]
from LINKEDSERVER.RemoteDatabase.dbo.tbl_bar
I'd like to do the following:
alter table [baz]
add foo_id int not null
go
alter table [baz] with check
add constraint [fk1_baz_to_foo]
foreign key([foo_id])
references [dbo].[vw_foo] ([id])
go
But that generates the error: "Foreign key 'fk1_baz_to_foo' references object 'dbo.vw_foo' which is not a user table."
If I try and put the foreign key directly onto the table using the following
alter table [baz] with check
add constraint [fk1_baz_to_bar]
foreign key([foo_id])
references LINKEDSERVER.RemoteDatabase.dbo.tbl_bar ([id])
Then I get the following error:
The object name 'LINKEDSERVER.RemoteDatabase.dbo.tbl_bar' contains more than the maximum number of prefixes. The maximum is 2.
Is there any way I can achieve the same effect?
Foreign keys can't be connected to non-local objects - they have to reference local tables. You get the "maximum number of prefixes" error because you're referencing the table with a 4-part name (LinkedServer.Database.Schema.Object), and a local object would only have a 3-part name.
Other solutions:
1) Replicate the data from the source (the location of the view) to the same server as the table you're trying to add the key on. You can do this hourly, daily, or whatever, depending on how often the source data changes.
2) Add a trigger on the source table to push any changes to your local copy. This would essentially be the same as #1, but with immediate propagation of changes.
3) Add an INSTEAD OF trigger to your table that manually checks the foreign key constraint by selecting from the linked server and comparing the value you're trying to INSERT/UPDATE. If it doesn't match, you can reject the change (a sketch follows below).
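A rough sketch of option 3, assuming the [baz] and [vw_foo] names from the question and that [foo_id] is the only column of interest on [baz]:
CREATE TRIGGER trg_baz_check_foo
ON [dbo].[baz]
INSTEAD OF INSERT
AS
BEGIN
    -- reject the statement if any incoming foo_id has no match in the remote view
    IF EXISTS (SELECT 1
               FROM inserted i
               WHERE NOT EXISTS (SELECT 1
                                 FROM [dbo].[vw_foo] f
                                 WHERE f.[id] = i.[foo_id]))
    BEGIN
        RAISERROR('foo_id does not exist in the remote table.', 16, 1);
        RETURN;
    END
    -- otherwise perform the insert ourselves (list every real column of [baz] here)
    INSERT INTO [dbo].[baz] ([foo_id])
    SELECT i.[foo_id]
    FROM inserted i;
END
Bear in mind this queries the linked server on every insert, so it trades latency for integrity.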
You can, but you have to use some dynamic SQL trickery to make it happen.
DECLARE @cmd NVARCHAR(4000)
SET @cmd = N'USE YourDatabase
ALTER TABLE YourTable
DROP CONSTRAINT YourConstraint'
EXEC YourServer.master.dbo.sp_executesql @cmd
No, foreign keys have to be made against user tables. Have you tried the below?
alter table [baz] with check
add constraint [fk1_baz_to_foo]
FOREIGN KEY([foo_id])
references
LINKEDSERVER.RemoteDatabase.dbo.tbl_bar([id])
go

Is it possible to change SQL user-defined data type?

I have a bunch of tables using a user-defined data type for the PK column. Is it possible to change this type using SQL Server 2005?
I would suggest that it is always possible to refactor poor or outmoded database designs; it simply depends on how much work you are willing to put in to do so.
If you are looking to replace the user-defined data type with a surrogate key, then you should be able to simply alter the existing table to contain a non-nullable identity column, which should cause all of the existing records to be assigned a new key automatically.
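A minimal sketch of that step, with a hypothetical table and column name:
-- existing rows are numbered automatically as the identity column is added
ALTER TABLE parent_table
    ADD new_pk_val INT IDENTITY(1,1) NOT NULL;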
Once the new field is populated with unique ids, if you need to go on and replace the foreign key references to this table, then I would simply alter those tables to contain the new field and use something like the following:
UPDATE child_table
SET new_fk_val =
    ( SELECT new_pk_val
      FROM parent_table
      WHERE parent_table.old_pk_val = child_table.old_fk_val )
Once that step is complete, you can drop the old foreign key constraint, drop the old foreign key column, drop the old primary key column, establish the new primary key constraint, and then establish the new foreign key constraint.
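A sketch of that sequence, reusing the hypothetical names from above (note the old primary key constraint has to be dropped before its column can be removed, and any cleanup of orphaned child rows, like the DELETE below, should happen before the old columns are dropped):
ALTER TABLE child_table  DROP CONSTRAINT fk_child_parent_old;
ALTER TABLE child_table  DROP COLUMN old_fk_val;
ALTER TABLE parent_table DROP CONSTRAINT pk_parent_old;
ALTER TABLE parent_table DROP COLUMN old_pk_val;
ALTER TABLE parent_table ADD CONSTRAINT pk_parent_new PRIMARY KEY (new_pk_val);
ALTER TABLE child_table  ADD CONSTRAINT fk_child_parent_new
    FOREIGN KEY (new_fk_val) REFERENCES parent_table (new_pk_val);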
Of course, if the old parent/child relationship was such that you have invalid records in the child table, you may have to do something like the following first, while the old columns are still in place:
DELETE FROM child_table
WHERE old_fk_val NOT IN
( SELECT old_pk_val FROM parent_table)